
InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array (cannot add the resource located at ... to the web application cache)



In this article we walk you through InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array in full, including the related case of a resource that cannot be added to a web application's cache. We also cover cut function: Cannot cast array data from dtype('float64') to dtype('<U32'), to help you better understand the topic.

Contents:

- InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array
- cut function: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'
- InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true
- InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'No files matched pattern:
- InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1] (Keras)

InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array (cannot add the resource located at ... to the web application cache)

How to solve InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array

I want to build a text classification model with tf-hub and export it as a tflite model, but I run into an error while converting a TensorFlow model that includes a TF Hub layer. Please help me solve it.

    import tensorflow as tf
    import tensorflow_hub as hub

    model = tf.keras.Sequential()
    model.add(tf.keras.layers.InputLayer(dtype=tf.string, input_shape=()))
    model.add(hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1"))
    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    tflite_model = converter.convert()

I tried both the tf-lite Python API and the command-line API, but I hit an InvalidArgumentError either way.


    InvalidArgumentError                      Traceback (most recent call last)
    <ipython-input-15-5a8dbd778645> in <module>()
          5 model.add(hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1"))
          6 converter = tf.lite.TFLiteConverter.from_keras_model(model)
    ----> 7 tflite_model = converter.convert()

    6 frames
    /usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in convert(self)
        850     frozen_func, graph_def = (
        851         _convert_to_constants.convert_variables_to_constants_v2_as_graph(
    --> 852             self._funcs[0], lower_control_flow=False))
        853
        854     input_tensors = [

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py in convert_variables_to_constants_v2_as_graph(func, lower_control_flow, aggressive_inlining)
       1103       func=func,
       1104       lower_control_flow=lower_control_flow,
    -> 1105       aggressive_inlining=aggressive_inlining)
       1106
       1107   output_graph_def, converted_input_indices = _replace_variables_by_constants(

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py in __init__(self, func, lower_control_flow, aggressive_inlining, variable_names_allowlist, variable_names_denylist)
        804         variable_names_allowlist=variable_names_allowlist,
        805         variable_names_denylist=variable_names_denylist)
    --> 806     self._build_tensor_data()
        807
        808   def _build_tensor_data(self):

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py in _build_tensor_data(self)
        823         data = map_index_to_variable[idx].numpy()
        824       else:
    --> 825         data = val_tensor.numpy()
        826       self._tensor_data[tensor_name] = _TensorData(
        827           numpy=data,

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in numpy(self)
       1069     """
       1070     # TODO(slebedev): Consider avoiding a copy for non-CPU or remote tensors.
    -> 1071     maybe_arr = self._numpy()  # pylint: disable=protected-access
       1072     return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr
       1073

    /usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)
       1037       return self._numpy_internal()
       1038     except core._NotOkStatusException as e:  # pylint: disable=protected-access
    -> 1039       six.raise_from(core._status_to_exception(e.code, e.message), None)  # pylint: disable=protected-access
       1040
       1041   @property

    /usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

    InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.

Solution

Last time I checked, TFLite did not support lookup tables, which are the main source of resource tensors in TF Hub models (besides variables, but those definitely work).
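If the failure comes from ops that merely lack TFLite builtins (rather than genuine lookup-table resources), one commonly suggested mitigation is to let the converter fall back to TensorFlow kernels via TF Select ops. A minimal sketch, assuming a TF 2.x converter; note this does not make lookup tables convertible, so the original error may well persist:

    import tensorflow as tf
    import tensorflow_hub as hub

    # Same model as in the question.
    model = tf.keras.Sequential([
        tf.keras.layers.InputLayer(dtype=tf.string, input_shape=()),
        hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1"),
    ])

    converter = tf.lite.TFLiteConverter.from_keras_model(model)
    # Allow ops without a TFLite builtin to run through the TF kernel fallback.
    converter.target_spec.supported_ops = [
        tf.lite.OpsSet.TFLITE_BUILTINS,
        tf.lite.OpsSet.SELECT_TF_OPS,
    ]
    tflite_model = converter.convert()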

cut function: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'


How to solve cut function: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'

I want to change the contents of one column of a DataFrame to "good" or "bad". The column is filled with numbers from 1 to 10; 1-5 is bad and 6-10 is good. I want to use the cut method for this.

    bins = (1, 5.5, 10)
    rating = ['bad', 'good']
    game['useropinion'] = pd.cut(rating, bins)

The result after running it:

    Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'

What is wrong, and how can I fix it?

Solution

You can do it like this:

    game['useropinion'] = pd.cut(game['useropinion'], bins, labels=rating)

Edit: to answer the question, you were trying to cut rating instead of the useropinion data, so naturally you get a TypeError, because rating is an array of strings while your bins are numeric.
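For completeness, a self-contained sketch of the corrected call; the game DataFrame here is a made-up stand-in for the one in the question, and include_lowest=True is an assumption added so a rating of exactly 1 lands in the first bin:

    import pandas as pd

    # Hypothetical stand-in for the question's DataFrame.
    game = pd.DataFrame({'useropinion': [1, 3, 5, 6, 8, 10]})

    bins = (1, 5.5, 10)       # (1, 5.5] -> bad, (5.5, 10] -> good
    rating = ['bad', 'good']

    # Cut the numeric column, not the list of labels.
    game['useropinion'] = pd.cut(game['useropinion'], bins,
                                 labels=rating, include_lowest=True)
    print(game)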

InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true


How to solve InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true

I use PCA to reduce the dimensionality of images before comparing them with the structural similarity index (SSIM). After applying PCA, tf.image.ssim throws an error.

Here I compare the images without PCA. This works perfectly:

    import numpy as np
    import tensorflow as tf
    import time

    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data(
        path='mnist.npz'
    )
    start = time.time()
    for i in range(1, 6000):
        x_train_zero = np.expand_dims(x_train[0], axis=2)
        x_train_expanded = np.expand_dims(x_train[i], axis=2)
        print(tf.image.ssim(x_train_zero, x_train_expanded, 255))
    print(time.time() - start)

Here I apply PCA to reduce the dimensionality of the images so that SSIM takes less time to compare them:

    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import PCA

    x_train = x_train.reshape(60000, -1)
    scaler = StandardScaler()
    X_scaled = scaler.fit_transform(x_train)
    pca = PCA(n_components=11)
    X_pca = pca.fit_transform(X_scaled).reshape(60000, 11, 1)
    start = time.time()
    for i in range(1, 6000):
        X_pca_zero = np.expand_dims(X_pca[0], axis=2)
        X_pca_expanded = np.expand_dims(X_pca[i], axis=2)
        print(tf.image.ssim(X_pca_zero, X_pca_expanded, 255))
    print(time.time() - start)

This code throws the error: InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: 11, 1, 1, 11

Solution

So, in short, the error occurs because the sizes of the inputs X_pca_zero and X_pca_expanded that you pass to tf.image.ssim do not match the default filter_size: with filter_size=11, the images must be at least 11x11. An example of how you could change the code is sketched below.
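A minimal sketch of such a change, assuming the post-PCA shapes from the question (tensors of shape (11, 1, 1)); the random arrays are hypothetical stand-ins for entries of X_pca:

    import numpy as np
    import tensorflow as tf

    # Hypothetical stand-ins with the question's post-PCA shape (11, 1, 1).
    X_pca_zero = np.random.rand(11, 1, 1).astype(np.float32)
    X_pca_expanded = np.random.rand(11, 1, 1).astype(np.float32)

    # The default filter_size=11 requires images of at least 11x11; with a
    # width of 1, the window must shrink for SSIM to run at all.
    print(tf.image.ssim(X_pca_zero, X_pca_expanded, max_val=255, filter_size=1))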

InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'No files matched pattern:


How to solve InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'No files matched pattern:

When I run the following code on Google Colab,

    tf.data.Dataset.list_files('/content/gdrive/MyDrive/Experiment/train/*.jpg')

I get the following error:

    InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summarized data: b'No files matched pattern: /content/gdrive/MyDrive/Experiment/train/*.jpg'

I have been stuck on this for the past two weeks; please help. Also, I had already mounted my Google Drive successfully before running the line above.

Solution


Here is an example that I am using.

    from google.colab import drive
    drive.mount('/content/drive')

    def load_image(filepath):
        raw_img = tf.io.read_file(filepath)
        img_tensor_int = tf.image.decode_jpeg(raw_img, channels=3)
        img_tensor_flt = tf.image.convert_image_dtype(img_tensor_int, tf.float32)
        return img_tensor_flt, img_tensor_flt

    def load_dataset(split):
        print('/content/drive/MyDrive/CelebAsubset/' + split + '/*.jpg')
        train_list_ds = tf.data.Dataset.list_files(
            '/content/drive/MyDrive/CelebAsubset/' + split + '/*.jpg', shuffle=False)
        train_ds = train_list_ds.map(load_image)
        return train_ds

    train_ds = load_dataset('train')
    val_ds = load_dataset('val')
    test_ds = load_dataset('test')
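If the pattern still matches nothing, it can help to test the glob directly before handing it to tf.data; a quick hedged check, using the path from the question. Note also that the question mounts Drive at /content/gdrive while the example above mounts it at /content/drive, and the pattern must match the actual mount point:

    import tensorflow as tf

    # Path taken from the question; adjust it to your actual mount point.
    pattern = '/content/gdrive/MyDrive/Experiment/train/*.jpg'

    # tf.io.gfile.glob returns a plain Python list, so an empty result is
    # easy to spot before tf.data raises InvalidArgumentError.
    matches = tf.io.gfile.glob(pattern)
    print(len(matches), 'files matched')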

InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1] (Keras)


How to solve InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1] (Keras)

I am trying to build a GAN model with Keras, but I am stuck on the error message InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1].

The data comes from a real dataset, and I am trying to use the GAN to address a class-imbalance problem in it. That is why the data shape differs from ordinary images such as 32x32.

If anyone knows what is going on here, please help. Really appreciated.

The code is as follows:

    import numpy as np
    from sklearn.model_selection import train_test_split

    class Data:
        """
        Define dataset for training GAN
        """
        def __init__(self, data, batch_size, z_input_dim):
            X_train, X_test, y_train, y_test = train_test_split(
                data.iloc[:, :-1].values, data.result.values, test_size=0.3)
            self.x_data = X_train
            self.x_data = self.x_data.reshape(
                (self.x_data.shape[0], 1) + (self.x_data.shape[1], 1))
            self.batch_size = batch_size
            self.z_input_dim = z_input_dim

        def get_real_sample(self):
            """
            get real sample images
            :return: batch_size number of image data
            """
            return self.x_data[np.random.randint(0, self.x_data.shape[0], size=self.batch_size)]

        def get_z_sample(self, sample_size):
            """
            get z sample data
            :return: random z data of (sample_size, z_input_dim) size
            """
            return np.random.uniform(-1.0, 1.0, (sample_size, self.z_input_dim))

And the GAN model:

    from tensorflow.keras import initializers
    from tensorflow.keras.layers import (BatchNormalization, Conv2D, Dense, Dropout,
                                         Flatten, LeakyReLU, MaxPooling2D, Reshape)
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.optimizers import Adam

    class GAN:
        def __init__(self, learning_rate, z_input_dim):
            """
            init params
            :param learning_rate: learning rate of optimizer
            :param z_input_dim: input dim of z
            """
            self.learning_rate = learning_rate
            self.z_input_dim = z_input_dim
            self.D = self.discriminator()
            self.G = self.generator()
            self.GD = self.combined()

        def discriminator(self):
            """
            define discriminator
            """
            D = Sequential()
            D.add(Conv2D(128, (1, 3), input_shape=(1, 19, 1),
                         kernel_initializer=initializers.RandomNormal(stddev=0.02),
                         data_format='channels_last'))
            D.add(LeakyReLU(0.2))
            D.add(MaxPooling2D(pool_size=(1, 2), strides=2, data_format='channels_last'))
            D.add(Conv2D(256, (1, 3), padding='same', data_format='channels_last'))
            D.add(Flatten())
            D.add(Dense(128))
            D.add(LeakyReLU(0.2))
            D.add(Dropout(0.3))
            D.add(Dense(1, activation='sigmoid'))
            adam = Adam(lr=self.learning_rate, beta_1=0.5)
            D.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
            return D

        def generator(self):
            """
            define generator
            """
            G = Sequential()
            G.add(Dense(256, input_dim=self.z_input_dim))
            G.add(LeakyReLU(0.2))
            G.add(Dense(19))
            G.add(LeakyReLU(0.2))
            G.add(BatchNormalization())
            G.add(Reshape((1, 19, 1), input_shape=(19,)))
            G.add(Conv2D(1, (1, 3), padding='same', activation='tanh',
                         data_format='channels_last'))
            adam = Adam(lr=self.learning_rate, beta_1=0.5)
            G.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
            return G

        def combined(self):
            """
            define combined gan model
            """
            G, D = self.G, self.D
            D.trainable = False
            GD = Sequential()
            GD.add(G)
            GD.add(D)
            adam = Adam(lr=self.learning_rate, beta_1=0.5)
            GD.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
            D.trainable = True
            return GD

And finally the driver code:

    import matplotlib.pyplot as plt
    from tensorflow.keras import backend as K

    class Model:
        def __init__(self, data, batch_size, epochs, learning_rate,
                     z_input_dim, n_iter_D, n_iter_G):
            self.epochs = epochs
            self.batch_size = batch_size
            self.learning_rate = learning_rate
            self.z_input_dim = z_input_dim
            self.data = Data(data, self.batch_size, self.z_input_dim)
            # the reason why D and G differ in iterations: the Generator needs
            # more training than the discriminator
            self.n_iter_D = n_iter_D
            self.n_iter_G = n_iter_G
            self.gan = GAN(self.learning_rate, self.z_input_dim)
            # print status
            batch_count = self.data.x_data.shape[0] / self.batch_size
            print('Epochs:', self.epochs)
            print('Batch size:', self.batch_size)
            print('Batches per epoch:', batch_count)
            print('Learning rate:', self.learning_rate)
            print('Image data format:', K.image_data_format())

        def fit(self):
            self.d_loss = []
            self.g_loss = []
            for epoch in range(self.epochs):
                # train discriminator on real data
                dloss = 0
                for iter in range(self.n_iter_D):
                    dloss = self.train_D()
                # train GD on generated fake data
                gloss = 0
                for iter in range(self.n_iter_G):
                    gloss = self.train_G()
                # save loss data
                self.d_loss.append(dloss)
                self.g_loss.append(gloss)
                # print status every 20 epochs
                if epoch % 20 == 0:
                    print("Epoch: {}, discriminator loss: {}, Generator loss: {}".format(
                        str(epoch), str(dloss), str(gloss)))
            # show loss after training
            self.plot_loss_graph(self.g_loss, self.d_loss)

        def train_D(self):
            """
            train discriminator
            """
            # Real data
            real = self.data.get_real_sample()
            # Generated data
            z = self.data.get_z_sample(self.batch_size)
            generated_images = self.gan.G.predict(z)
            print(generated_images.shape)
            print(generated_images.dtype)
            # label and concatenate generated and real images
            x = np.concatenate((real, generated_images), axis=0)
            y = [0.9] * self.batch_size + [0] * self.batch_size
            # train discriminator
            self.gan.D.trainable = True
            loss = self.gan.D.train_on_batch(x, y)
            return loss

        def train_G(self):
            """
            train Generator
            """
            # Generated data
            z = self.data.get_z_sample(self.batch_size)
            # labeling
            y = [1] * self.batch_size
            # train generator
            self.gan.D.trainable = False
            loss = self.gan.GD.train_on_batch(z, y)
            return loss

        def plot_loss_graph(self, g_loss, d_loss):
            """
            Save training loss graph
            """
            # show loss graph
            plt.figure(figsize=(10, 8))
            plt.plot(d_loss, label='discriminator loss')
            plt.plot(g_loss, label='Generator loss')
            plt.xlabel('Epoch')
            plt.ylabel('Loss')
            plt.legend()
            plt.show()

When I run the code below:

    batch_size = 50
    epochs = 100
    learning_rate = 0.001
    z_input_dim = 100
    n_iter_D = 1
    n_iter_G = 5

    # run model
    model = Model(data_dummise, batch_size, epochs, learning_rate,
                  z_input_dim, n_iter_D, n_iter_G)
    model.fit()

I get the following error message:

    ---------------------------------------------------------------------------
    InvalidArgumentError                      Traceback (most recent call last)
    <ipython-input> in <module>
          8 # run model
          9 model = Model(data_dummise, batch_size, epochs, learning_rate, z_input_dim, n_iter_D, n_iter_G)
    ---> 10 model.fit()

    in fit(self)
         33             gloss = 0
         34             for iter in range(self.n_iter_G):
    ---> 35                 gloss = self.train_G()
         36
         37         # save loss data

    in train_G(self)
         82         # train generator
         83         self.gan.D.trainable = False
    ---> 84         loss = self.gan.GD.train_on_batch(z, y)
         85         return loss
         86

    ~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
       1173       self._update_sample_weight_modes(sample_weights=sample_weights)
       1174       self._make_train_function()
    -> 1175       outputs = self.train_function(ins)  # pylint: disable=not-callable
       1176
       1177     if reset_metrics:

    ~/.local/lib/python3.6/site-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
       3290
       3291     fetched = self._callable_fn(*array_vals,
    -> 3292                                 run_metadata=self.run_metadata)
       3293     self._call_fetch_callbacks(fetched[-len(self._fetches):])
       3294     output_structure = nest.pack_sequence_as(

    ~/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
       1456     ret = tf_session.TF_SessionRunCallable(self._session._session,
       1457                                            self._handle, args,
    -> 1458                                            run_metadata_ptr)
       1459     if run_metadata:
       1460       proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

    InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1]
         [[{{node conv2d_17_input}}]]
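When Keras reports an unfed placeholder for a model input during train_on_batch, a useful first step is to check whether the arrays being fed actually match the input shape the discriminator declares. A hedged diagnostic sketch, reusing the objects defined in the question:

    # Compare the discriminator's declared input with the batches being fed.
    real = model.data.get_real_sample()
    print('D expects:', model.gan.D.input_shape)        # e.g. (None, 1, 19, 1)
    print('real batch:', real.shape)                    # must match apart from the batch dim
    z = model.data.get_z_sample(model.batch_size)
    print('G produces:', model.gan.G.predict(z).shape)  # fed to D inside the combined model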

That wraps up our discussion of InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array (cannot add the resource located at ... to the web application cache). Thank you for taking the time to read this site's content. For more information on cut function: Cannot cast array data from dtype('float64') to dtype('<U32') and related topics, don't forget to search this site.
