tensorflow – show overfitting with epochs & batch size

I was looking at this example from ml5.js. I am trying to build a small script with it where I can show users the effects of changing the batch size or the number of epochs. However, even when I change these values, the classification is always correct and always reports a high confidence of 0.9999. Is there any way I can modify this so that I can clearly demonstrate the effects of smaller/larger epochs and batch sizes?

Any examples would help.

let nn;

const options = {
  inputs: 1,
  outputs: 2,
  task: 'classification',
  debug: true
}

function setup(){
  createCanvas(400, 400);
  nn = ml5.neuralNetwork(options);


  console.log(nn)
  createTrainingData();
  nn.normalizeData();

  const trainingOptions = {
    batchSize: 24,
    epochs: 32
  };

  nn.train(trainingOptions, finishedTraining); // if you want to change the training options
  // nn.train(finishedTraining); // use the default training options
}

function finishedTraining(){

  nn.classify([300], function(err, result){
    console.log("RESULT", result);
  });
}


function createTrainingData(){
  for(let i = 0; i < 400; i++){
    if(i % 2 === 0){
      const x = random(0, width/2);
      nn.addData([x], ['left']);
    } else {
      const x = random(width/2, width);
      nn.addData([x], ['right']);
    }
  }
}

p5 editor link:

https://editor.p5js.org/ml5/sketches/NeuralNetwork_Simple_Classification
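One reason you always see 0.9999 is that the two classes in createTrainingData() are perfectly separable, so even a badly under-trained model places the boundary correctly. If you make the 'left' and 'right' ranges overlap (e.g. draw 'left' from random(0, width * 0.6) and 'right' from random(width * 0.4, width)) the effect of epochs and batchSize becomes visible. The same idea in a dependency-free Python sketch, not ml5.js: a 1-D logistic classifier trained with mini-batch gradient descent; every name here is illustrative, nothing is ml5 API.

```python
import math
import random

random.seed(42)

# Two overlapping classes on a 0..400 axis, like the left/right sketch,
# but with overlap in the middle so under-training is actually visible.
data = [(random.uniform(0, 230), 0) for _ in range(200)] + \
       [(random.uniform(170, 400), 1) for _ in range(200)]
random.shuffle(data)

def train(epochs, batch_size, lr=0.05):
    """Mini-batch gradient descent on a 1-D logistic classifier."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for start in range(0, len(data), batch_size):
            batch = data[start:start + batch_size]
            gw = gb = 0.0
            for x, y in batch:
                x_s = x / 400.0                      # like nn.normalizeData()
                p = 1.0 / (1.0 + math.exp(-(w * x_s + b)))
                gw += (p - y) * x_s
                gb += (p - y)
            w -= lr * gw / len(batch)
            b -= lr * gb / len(batch)
    return w, b

def accuracy(w, b):
    hits = 0
    for x, y in data:
        p = 1.0 / (1.0 + math.exp(-(w * (x / 400.0) + b)))
        hits += int((p > 0.5) == (y == 1))
    return hits / len(data)

under_trained = accuracy(*train(epochs=1, batch_size=64))
well_trained = accuracy(*train(epochs=400, batch_size=16))
print(under_trained, well_trained)
```

With overlapping classes, one epoch with a large batch leaves the boundary near its starting point and scores markedly worse than many epochs with a small batch; tuning epochs/batchSize then has a visible effect, which is exactly the demo you want.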

tfrecord – training over TPU with tensorflow

When I executed

data = tfds.load("cycle_gan/monet2photo", try_gcs=True)
for i in data['trainA']:
    print(type(i))

When I run this without a TPU, I get the images (decoded from JPEG), but with a TPU the reads go through the TFRecord files and I end up with the error below:

---------------------------------------------------------------------------
AttributeError                            Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in _next_internal(self)
    761         # Fast path for the case `self._structure` is not a nested structure.
--> 762         return self._element_spec._from_compatible_tensor_list(ret)  # pylint: disable=protected-access
    763       except AttributeError:

AttributeError: 'dict' object has no attribute '_from_compatible_tensor_list'

During handling of the above exception, another exception occurred:

UnimplementedError                        Traceback (most recent call last)
14 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py in execution_mode(mode)
   2101       ctx.executor = executor_new
-> 2102       yield
   2103     finally:

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in _next_internal(self)
    763       except AttributeError:
--> 764         return structure.from_compatible_tensor_list(self._element_spec, ret)
    765 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/util/structure.py in from_compatible_tensor_list(element_spec, tensor_list)
    229       lambda spec, value: spec._from_compatible_tensor_list(value),
--> 230       element_spec, tensor_list)
    231 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/util/structure.py in _from_tensor_list_helper(decode_fn, element_spec, tensor_list)
    204     value = tensor_list[i:i + num_flat_values]
--> 205     flat_ret.append(decode_fn(component_spec, value))
    206     i += num_flat_values

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/util/structure.py in <lambda>(spec, value)
    228   return _from_tensor_list_helper(
--> 229       lambda spec, value: spec._from_compatible_tensor_list(value),
    230       element_spec, tensor_list)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/tensor_spec.py in _from_compatible_tensor_list(self, tensor_list)
    176     assert len(tensor_list) == 1
--> 177     tensor_list[0].set_shape(self._shape)
    178     return tensor_list[0]

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in set_shape(self, shape)
   1205   def set_shape(self, shape):
-> 1206     if not self.shape.is_compatible_with(shape):
   1207       raise ValueError(

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in shape(self)
   1166         # `EagerTensor`, in C.
-> 1167         self._tensor_shape = tensor_shape.TensorShape(self._shape_tuple())
   1168       except core._NotOkStatusException as e:

UnimplementedError: File system scheme '(local)' not implemented (file: '/root/tensorflow_datasets/cycle_gan/monet2photo/2.0.0/cycle_gan-trainA.tfrecord-00000-of-00001')

During handling of the above exception, another exception occurred:

UnimplementedError                        Traceback (most recent call last)
<ipython-input-13-71aa8855ccd2> in <module>()
----> 1 for i in data['trainA']:
      2     example = tf.train.Example()
      3     example.ParseFromString(i)

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in __next__(self)
    734 
    735   def __next__(self):  # For Python 3 compatibility
--> 736     return self.next()
    737 
    738   def _next_internal(self):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in next(self)
    770   def next(self):
    771     try:
--> 772       return self._next_internal()
    773     except errors.OutOfRangeError:
    774       raise StopIteration

/usr/local/lib/python3.6/dist-packages/tensorflow/python/data/ops/iterator_ops.py in _next_internal(self)
    762         return self._element_spec._from_compatible_tensor_list(ret)  # pylint: disable=protected-access
    763       except AttributeError:
--> 764         return structure.from_compatible_tensor_list(self._element_spec, ret)
    765 
    766   @property

/usr/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
     97                 value = type()
     98             try:
---> 99                 self.gen.throw(type, value, traceback)
    100             except StopIteration as exc:
    101                 # Suppress StopIteration *unless* it's the same exception that

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/context.py in execution_mode(mode)
   2103     finally:
   2104       ctx.executor = executor_old
-> 2105       executor_new.wait()
   2106 
   2107 

/usr/local/lib/python3.6/dist-packages/tensorflow/python/eager/executor.py in wait(self)
     65   def wait(self):
     66     """Waits for ops dispatched in this executor to finish."""
---> 67     pywrap_tfe.TFE_ExecutorWaitForAllPendingNodes(self._handle)
     68 
     69   def clear_error(self):

UnimplementedError: File system scheme '(local)' not implemented (file: '/root/tensorflow_datasets/cycle_gan/monet2photo/2.0.0/cycle_gan-trainA.tfrecord-00000-of-00001')

data is

{'testA': <DatasetV1Adapter shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>,
 'testB': <DatasetV1Adapter shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>,
 'trainA': <DatasetV1Adapter shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>,
 'trainB': <DatasetV1Adapter shapes: {image: (None, None, 3), label: ()}, types: {image: tf.uint8, label: tf.int64}>}

Files associated with the download

os.listdir('/root/tensorflow_datasets/cycle_gan/monet2photo/2.0.0/')
['dataset_info.json',
 'cycle_gan-testB.tfrecord-00000-of-00001',
 'label.labels.txt',
 'cycle_gan-trainB.tfrecord-00000-of-00002',
 'image.image.json',
 'cycle_gan-trainA.tfrecord-00000-of-00001',
 'cycle_gan-trainB.tfrecord-00001-of-00002',
 'cycle_gan-testA.tfrecord-00000-of-00001']

I want to train a CycleGAN over this dataset using data, so please help me with how to preprocess this dataset to get the images as tensors so I can continue.

tensorflow 2.3.0,
python 3.6.9
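The UnimplementedError is not about preprocessing: a Cloud TPU executes the tf.data input pipeline on the TPU host, which has no access to the Colab VM's local disk, so any file it reads must live in Google Cloud Storage (a gs:// path). try_gcs=True only helps for datasets mirrored in the public tfds bucket; if it falls back to /root/tensorflow_datasets, the TPU cannot read it. The usual workaround is to copy the prepared dataset into your own bucket and pass that as data_dir. A sketch, where the bucket name and gsutil command are placeholders for your own setup:

```python
import posixpath

def to_gcs_data_dir(local_data_dir, bucket):
    """Map a local tensorflow_datasets directory to its mirror in a GCS bucket.

    `bucket` is a placeholder; substitute your own bucket name.
    """
    suffix = local_data_dir.split("tensorflow_datasets", 1)[-1].strip("/")
    parts = [p for p in ("tensorflow_datasets", suffix) if p]
    return posixpath.join("gs://" + bucket, *parts)

# Usage sketch, assuming the dataset was first copied into the bucket with
#   gsutil -m cp -r /root/tensorflow_datasets gs://my-tpu-bucket/
#
# data_dir = to_gcs_data_dir("/root/tensorflow_datasets", "my-tpu-bucket")
# data = tfds.load("cycle_gan/monet2photo", data_dir=data_dir)
# for batch in data["trainA"].batch(32):
#     images = batch["image"]   # uint8 image tensors, ready for preprocessing

print(to_gcs_data_dir("/root/tensorflow_datasets", "my-tpu-bucket"))
```

Once the data is read from gs://, iterating the dataset yields dicts of decoded image tensors as usual; no manual tf.train.Example parsing is needed, because tfds already does the TFRecord decoding for you.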

Object Detection on Android with Tensorflow Lite (custom model)

Trying to implement a custom object detection model with TensorFlow Lite, using Android Studio. I am following the guidance provided here: Running on mobile with TensorFlow Lite, but with no success. The example model runs properly, showing all the detected labels. Nonetheless, when I try my custom model I am not getting any labels at all. I have also tried other models (from the internet), but the outcome is the same; it is as if the labels are not being passed the right way. I copied my detect.tflite and labelmap.txt, and changed TF_OD_API_INPUT_SIZE and TF_OD_API_IS_QUANTIZED in DetectorActivity.java, but I am still not getting results (a detected class with a bounding box and a score).

The Logcat shows the following:

2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH3 /odm/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH2 /vendor/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH1 /system/lib64/hw/gralloc.qcom.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH3 /odm/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH2 /vendor/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.315 31681-31681/org.tensorflow.lite.examples.detection E/HAL: PATH1 /system/lib64/hw/gralloc.msm8953.so
2020-10-11 18:37:54.859 31681-31681/org.tensorflow.lite.examples.detection E/tensorflow: CameraActivity: Exception!
    java.lang.IllegalStateException: This model does not contain associated files, and is not a Zip file.
        at org.tensorflow.lite.support.metadata.MetadataExtractor.assertZipFile(MetadataExtractor.java:325)
        at org.tensorflow.lite.support.metadata.MetadataExtractor.getAssociatedFile(MetadataExtractor.java:165)
        at org.tensorflow.lite.examples.detection.tflite.TFLiteObjectDetectionAPIModel.create(TFLiteObjectDetectionAPIModel.java:118)
        at org.tensorflow.lite.examples.detection.DetectorActivity.onPreviewSizeChosen(DetectorActivity.java:96)
        at org.tensorflow.lite.examples.detection.CameraActivity.onPreviewFrame(CameraActivity.java:200)
        at android.hardware.Camera$EventHandler.handleMessage(Camera.java:1157)
        at android.os.Handler.dispatchMessage(Handler.java:102)
        at android.os.Looper.loop(Looper.java:165)
        at android.app.ActivityThread.main(ActivityThread.java:6375)
        at java.lang.reflect.Method.invoke(Native Method)
        at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:912)
        at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:802)

How can I get the detections? Do I need an additional (metadata) file for the labels, or am I doing something else wrong?
The above case was tested on an Android 7 device (on Android 10 the application crashes). Thanks!
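The stack trace points at the likely cause: TFLiteObjectDetectionAPIModel.create() in the current example app reads the label file out of the model's metadata, and a plain detect.tflite has none (hence the "not a Zip file" from MetadataExtractor). Two ways out: check out an older revision of the example that loads labelmap.txt from assets, or attach metadata to the model. The latter can be done with the tflite-support metadata writer; a sketch, assuming pip install tflite-support and your local file names (the 127.5 normalization values match the usual SSD MobileNet preprocessing; adjust if your model differs):

```python
from tflite_support.metadata_writers import object_detector
from tflite_support.metadata_writers import writer_utils

# Attach the label file to the model as metadata, so MetadataExtractor
# in the Android example app can find it.
writer = object_detector.MetadataWriter.create_for_inference(
    writer_utils.load_file("detect.tflite"),
    input_norm_mean=[127.5],      # preprocessing assumed by the model
    input_norm_std=[127.5],
    label_file_paths=["labelmap.txt"])

writer_utils.save_file(writer.populate(), "detect_with_metadata.tflite")
```

Ship detect_with_metadata.tflite in the app's assets instead of the plain detect.tflite; the example should then find the labels without further Java changes.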

python – Error importing tensorflow

I installed TensorFlow with

pip3 install --upgrade tensorflow

Everything downloaded and installed correctly, but when I try to import it I get the following error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\__init__.py", line 41, in <module>
    from tensorflow.python.tools import module_util as _module_util
  File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\python\__init__.py", line 40, in <module>
    from tensorflow.python.eager import context
  File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\python\eager\context.py", line 32, in <module>
    from tensorflow.core.framework import function_pb2
  File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\function_pb2.py", line 16, in <module>
    from tensorflow.core.framework import attr_value_pb2 as tensorflow_dot_core_dot_framework_dot_attr__value__pb2
  File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\attr_value_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__pb2
  File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\tensor_pb2.py", line 16, in <module>
    from tensorflow.core.framework import resource_handle_pb2 as tensorflow_dot_core_dot_framework_dot_resource__handle__pb2
  File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\resource_handle_pb2.py", line 16, in <module>
    from tensorflow.core.framework import tensor_shape_pb2 as tensorflow_dot_core_dot_framework_dot_tensor__shape__pb2
  File "C:\Users\Eloy\anaconda3\lib\site-packages\tensorflow\core\framework\tensor_shape_pb2.py", line 112, in <module>
    '__module__' : 'tensorflow.core.framework.tensor_shape_pb2'
TypeError: expected bytes, Descriptor found

I have Python 3.7.6 and TensorFlow 2.3, CPU only.

tensorflow federated – Learning rate setting when calling the function tff.learning.build_federated_averaging_process

I’m carrying out a federated learning process and use the function tff.learning.build_federated_averaging_process to create an iterative federated learning process. As mentioned in the TFF tutorial, this function has two arguments, client_optimizer_fn and server_optimizer_fn, which, as I understand it, represent the optimizers for the clients and the server, respectively. But in the FedAvg paper it seems that only the clients carry out optimization while the server only does the averaging operation, so what exactly does server_optimizer_fn do, and what does its learning rate mean?
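One way to reconcile the two views: TFF's generalized FedAvg treats the average client update as a pseudo-gradient and lets the server apply it with its own optimizer. Plain averaging from the FedAvg paper is the special case of server-side SGD with learning rate 1.0. A toy sketch with scalar weights (all names illustrative, nothing here is TFF API):

```python
def server_update(server_weight, client_weights, server_lr):
    # The average client delta acts as a pseudo-gradient for the server optimizer.
    avg_delta = sum(server_weight - w for w in client_weights) / len(client_weights)
    return server_weight - server_lr * avg_delta   # one server-side SGD step

clients = [1.0, 3.0, 5.0]   # client weights after local training
w = 10.0                    # current server weight

# With server_lr = 1.0 this reduces to the FedAvg paper: plain averaging.
assert server_update(w, clients, server_lr=1.0) == sum(clients) / len(clients)

# With a smaller server_lr the server model only moves part-way to the average.
print(server_update(w, clients, server_lr=0.5))
```

So server_optimizer_fn controls how the averaged client update is applied; changing it to, say, SGD with momentum or Adam gives server-side variants of FedAvg, while its learning rate scales how far the global model moves toward the client average each round.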

How do I save/export (as .tf or .tflite), run, or test this TensorFlow Convolutional Neural Network (CNN) that I trained as a Python file?

How do I save, run, or test this TensorFlow Convolutional Neural Network (CNN) that I trained as a Python file? I want to be able to export/save this model as a .tf and .tflite file, as well as input images to test it.

Here is the code for my model:

import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data
import numpy as np

DATA_DIR = 'data'
NUM_STEPS = 1000
MINIBATCH_SIZE = 100

def weight_variable(shape):
    initial = tf.truncated_normal(shape, stddev=0.1)
    return tf.Variable(initial)

def bias_variable(shape):
    initial = tf.constant(0.1, shape=shape)
    return tf.Variable(initial)

def conv2d(x, W):
    return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')

def max_pool_2x2(x):
    return tf.nn.max_pool(x, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

def conv_layer(input, shape):
    W = weight_variable(shape)
    b = bias_variable([shape[3]])
    return tf.nn.relu(conv2d(input, W) + b)

def full_layer(input, size):
    in_size = int(input.get_shape()[1])
    W = weight_variable([in_size, size])
    b = bias_variable([size])
    return tf.matmul(input, W) + b

x = tf.placeholder(tf.float32, shape=[None, 784])
y_ = tf.placeholder(tf.float32, shape=[None, 10])

x_image = tf.reshape(x, [-1, 28, 28, 1])
conv1 = conv_layer(x_image, shape=[5, 5, 1, 32])
conv1_pool = max_pool_2x2(conv1)

conv2 = conv_layer(conv1_pool, shape=[5, 5, 32, 64])
conv2_pool = max_pool_2x2(conv2)

conv2_flat = tf.reshape(conv2_pool, [-1, 7*7*64])
full_1 = tf.nn.relu(full_layer(conv2_flat, 1024))

keep_prob = tf.placeholder(tf.float32)
full1_drop = tf.nn.dropout(full_1, keep_prob=keep_prob)

y_conv = full_layer(full1_drop, 10)

mnist = input_data.read_data_sets(DATA_DIR, one_hot=True)

cross_entropy = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=y_conv, labels=y_))

train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

correct_prediction = tf.equal(tf.argmax(y_conv, 1), tf.argmax(y_, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for i in range(NUM_STEPS):
        batch = mnist.train.next_batch(50)

        if i % 100 == 0:
            train_accuracy = sess.run(accuracy, feed_dict={x: batch[0], y_: batch[1], keep_prob: 1.0})
            print("step {}, training accuracy {}".format(i, train_accuracy))

        sess.run(train_step, feed_dict={x: batch[0], y_: batch[1], keep_prob: 0.5})

    X = mnist.test.images.reshape(10, 1000, 784)
    Y = mnist.test.labels.reshape(10, 1000, 10)
    test_accuracy = np.mean([sess.run(accuracy, feed_dict={x: X[i], y_: Y[i], keep_prob: 1.0}) for i in range(10)])

print("test accuracy: {}".format(test_accuracy))

Can someone please tell me how to save/export this model as a .tf or .tflite, and test this model?
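Since the graph above is TF 1.x style, the export also has to use 1.x APIs. A hedged sketch of the usual options, meant to go inside the with tf.Session() as sess: block after the training loop (the file names are placeholders). One caveat: keep_prob is a placeholder, so for a clean TFLite conversion it is easiest to rebuild the inference graph with dropout removed (or keep_prob fixed to 1.0) before converting:

```python
# 1) SavedModel export (TF 1.x simple_save); this is the ".tf"-style format.
tf.saved_model.simple_save(
    sess, "exported_model",
    inputs={"x": x, "keep_prob": keep_prob},
    outputs={"y_conv": y_conv})

# 2) Checkpoint, if you only want to reload the variables later.
saver = tf.train.Saver()
saver.save(sess, "checkpoints/model.ckpt")

# 3) TFLite, converting straight from the live session (TF 1.x API).
#    keep_prob is listed as an input here; stripping dropout from the
#    inference graph beforehand is the cleaner route.
converter = tf.lite.TFLiteConverter.from_session(sess, [x, keep_prob], [y_conv])
tflite_model = converter.convert()
with open("model.tflite", "wb") as f:
    f.write(tflite_model)
```

To test images afterwards, reload the SavedModel (or restore the checkpoint with saver.restore) and run sess.run(y_conv, feed_dict={x: images, keep_prob: 1.0}) on 784-element flattened 28x28 inputs.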

tensorflow – Loading custom CTC layer from h5 file in Keras

I have a CTCLayer class like this:

class CTCLayer(layers.Layer):
    def __init__(self, name=None):
        super().__init__(name=name)
        self.loss_fn = keras.backend.ctc_batch_cost

    def call(self, y_true, y_pred):
        # Compute the training-time loss value and add it
        # to the layer using `self.add_loss()`.
        batch_len = tf.cast(tf.shape(y_true)[0], dtype="int64")
        input_length = tf.cast(tf.shape(y_pred)[1], dtype="int64")
        label_length = tf.cast(tf.shape(y_true)[1], dtype="int64")

        input_length = input_length * tf.ones(shape=(batch_len, 1), dtype="int64")
        label_length = label_length * tf.ones(shape=(batch_len, 1), dtype="int64")

        loss = self.loss_fn(y_true, y_pred, input_length, label_length)
        self.add_loss(loss)

        # At test time, just return the computed predictions
        return y_pred

I trained my model, saved it to a model.h5 file and loaded it through:

model_load = tf.keras.models.load_model('model.h5', custom_objects={'CTCLayer': CTCLayer})

It’s throwing an __init__() got an unexpected keyword argument 'trainable' error.

Since I don’t want to train my model again (time constraint), is there any workaround I can do to load the model without having to add a get_config() in the CTCLayer class?

And if not, how should I implement get_config() in the class?
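There is a workaround that needs neither retraining nor get_config(): when Keras deserializes the model, it passes base-layer arguments such as trainable and dtype from the saved config to __init__, and the signature __init__(self, name=None) rejects them. Accepting **kwargs and forwarding them to super() is enough, because the h5 weights are untouched. The failure mode can be reproduced with plain Python classes (Base here is a stand-in for keras.layers.Layer):

```python
class Base:
    """Stand-in for keras.layers.Layer's base __init__."""
    def __init__(self, name=None, trainable=True, dtype=None):
        self.name, self.trainable, self.dtype = name, trainable, dtype

class BrokenCTCLayer(Base):      # mirrors the original __init__ signature
    def __init__(self, name=None):
        super().__init__(name=name)

class FixedCTCLayer(Base):       # accept and forward base-layer kwargs
    def __init__(self, name=None, **kwargs):
        super().__init__(name=name, **kwargs)

# Keras's loader effectively does this with the saved layer config:
config = {"name": "ctc_loss", "trainable": True, "dtype": "float32"}

try:
    BrokenCTCLayer(**config)
except TypeError as e:
    print(e)                     # unexpected keyword argument 'trainable'

layer = FixedCTCLayer(**config)  # loads fine
print(layer.trainable, layer.dtype)
```

So in the real CTCLayer, change the constructor to def __init__(self, name=None, **kwargs): super().__init__(name=name, **kwargs) and load_model with custom_objects should work on the existing model.h5.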

tensorflow – TfLite for Microcontrollers giving hybrid error

I converted my Keras .h5 file to a quantized .tflite in order to run it on the new OpenMV Cam H7 Plus, but when I run it I get an error saying “Hybrid models are not supported on TFLite Micro.”

I’m not sure why my model ends up hybrid; the code I used to convert it is below:

model = load_model('inceptionV3.h5')

# Convert the model.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
tflite_model = converter.convert()

# Save the TF Lite model.
with tf.io.gfile.GFile('inceptionV3_openmv2.tflite', 'wb') as f:
  f.write(tflite_model)

I’d appreciate it if someone could tell me whether I’m doing something wrong or whether there is a better way to convert it.

third party libraries – Should you use popular (e.g. OpenCV, Boost, Eigen, TensorFlow) types on your interfaces?

Suppose you are writing software where a popular existing library does not have all the algorithms/features you want, but provides some “vocabulary” types (domain equivalents of std::vector/std::string) you could use.

Should you use that library and be tied to it, or write your own types for your interface function signatures (with the option to convert to the 3rd-party types quickly, so there is no noticeable performance difference)?

From what I see, the benefits of using the 3rd-party lib are:

  • Cheaper (no development cost, only adoption/usage cost)
  • Probably much better documented / fewer bugs
  • Less noise in the code: void Do(MyX& x) { ThirdPartyX xtp(x); ThirdPartyAlg(xtp); ... }
  • No surprises (if your types behave slightly differently from the 3rd-party types, new hires with 3rd-party experience may be surprised)

From what I see, the problems of using the 3rd-party lib are:

  • Hard to switch away from
  • Might not fit your problems perfectly / might make tradeoffs you do not like

I do not care about the cost of installing/maintaining a third-party lib, since for large projects that cost is negligible.

Assume that the 3rd-party library is well maintained, not some guy’s GitHub repo whose last commit was 5 years ago.
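A middle ground worth naming: keep your own thin vocabulary type on public interfaces and confine all knowledge of the third-party representation to one adapter layer, so switching libraries touches one file rather than every signature. A minimal Python sketch of the pattern (the "third-party" type here is a stand-in, not a real library):

```python
from dataclasses import dataclass

@dataclass
class Point:                      # your own vocabulary type on public interfaces
    x: float
    y: float

class ThirdPartyPoint:            # stand-in for e.g. an OpenCV/Eigen type
    def __init__(self, coords):
        self.coords = coords

def to_third_party(p: Point) -> ThirdPartyPoint:
    # The only place that knows the third-party representation.
    return ThirdPartyPoint((p.x, p.y))

def from_third_party(tp: ThirdPartyPoint) -> Point:
    return Point(*tp.coords)

# Public API stays in terms of Point; the conversion is an internal detail.
def mirror(p: Point) -> Point:
    tp = to_third_party(p)        # hand off to "library" code
    mirrored = ThirdPartyPoint((-tp.coords[0], tp.coords[1]))
    return from_third_party(mirrored)

print(mirror(Point(2.0, 3.0)))    # Point(x=-2.0, y=3.0)
```

This buys the switch-away safety at the price of the conversion "spam", but concentrated in two functions instead of scattered through every caller; whether that trade is worth it mostly depends on how many interfaces would otherwise expose the third-party type.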

python – Problem installing TensorFlow through pip

I used the command pip install tensorflow in my cmd and it started downloading, but it always ends with this error message:

ERROR: Could not install packages due to an EnvironmentError: [Errno 2] No such file or directory:
'C:\Users\xgxbr\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.8_qbz5n2kfra8p0\LocalCache\local-packages\Python38\site-packages\tensorboard_plugin_wit\_vendor\tensorflow_serving\sources\storage_path\__pycache__\file_system_storage_path_source_pb2.cpython-38.pyc'

How can I properly fix that? Is something missing? I’ve downloaded the CUDA version for my machine, but I don’t know if anything else is missing. Sorry for the amateur question.