In this article, we will walk you through the error InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array. We will also cover the cut function error "Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'" and several related problems:
- InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array
- cut function: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'
- InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true
- InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true; summary data: b'No files matched pattern:
- InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1] (Keras)
InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array
How to solve InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array
I want to build a text classification model with tf-hub and export it as a tflite model, but I ran into an error while converting a TensorFlow model that includes a tf-hub layer. Please help me work it out.
import tensorflow as tf
import tensorflow_hub as hub
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(dtype=tf.string, input_shape=()))
model.add(hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1"))
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
I tried both the tf-lite Python API and the command-line API, but I ran into the InvalidArgumentError below.
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-15-5a8dbd778645> in <module>()
      5 model.add(hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1"))
      6 converter = tf.lite.TFLiteConverter.from_keras_model(model)
----> 7 tflite_model = converter.convert()

6 frames
/usr/local/lib/python3.6/dist-packages/tensorflow/lite/python/lite.py in convert(self)
    850     frozen_func, graph_def = (
    851         _convert_to_constants.convert_variables_to_constants_v2_as_graph(
--> 852             self._funcs[0], lower_control_flow=False))
    853
    854     input_tensors = [

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py in convert_variables_to_constants_v2_as_graph(func, lower_control_flow, aggressive_inlining)
   1103       func=func,
   1104       lower_control_flow=lower_control_flow,
-> 1105       aggressive_inlining=aggressive_inlining)
   1106
   1107   output_graph_def, converted_input_indices = _replace_variables_by_constants(

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py in __init__(self, func, aggressive_inlining, variable_names_allowlist, variable_names_denylist)
    804         variable_names_allowlist=variable_names_allowlist,
    805         variable_names_denylist=variable_names_denylist)
--> 806     self._build_tensor_data()
    807
    808   def _build_tensor_data(self):

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/convert_to_constants.py in _build_tensor_data(self)
    823         data = map_index_to_variable[idx].numpy()
    824       else:
--> 825         data = val_tensor.numpy()
    826       self._tensor_data[tensor_name] = _TensorData(
    827           numpy=data,

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in numpy(self)
   1069     """
   1070     # Todo(slebedev): Consider avoiding a copy for non-cpu or remote tensors.
-> 1071     maybe_arr = self._numpy()  # pylint: disable=protected-access
   1072     return maybe_arr.copy() if isinstance(maybe_arr, np.ndarray) else maybe_arr
   1073

/usr/local/lib/python3.6/dist-packages/tensorflow/python/framework/ops.py in _numpy(self)
   1037       return self._numpy_internal()
   1038     except core._NotOkStatusException as e:  # pylint: disable=protected-access
-> 1039       six.raise_from(core._status_to_exception(e.code, e.message), None)  # pylint: disable=protected-access
   1040
   1041   @property

/usr/local/lib/python3.6/dist-packages/six.py in raise_from(value, from_value)

InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array.
Solution
Last time I checked, TFLite did not support lookup tables, which are the main source of resource tensors in TF Hub models (aside from variables, which definitely do work).
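One thing you can try, if your deployment can run select TensorFlow ops through the flex delegate, is to let the converter keep unsupported ops (such as lookup-table ops) as TensorFlow ops instead of lowering everything to TFLite builtins. This is only a sketch of that idea, not a confirmed fix for this exact model:

import tensorflow as tf
import tensorflow_hub as hub

# Same model as in the question: a TF Hub text embedding inside a Keras Sequential model.
model = tf.keras.Sequential()
model.add(tf.keras.layers.InputLayer(dtype=tf.string, input_shape=()))
model.add(hub.KerasLayer("https://tfhub.dev/google/tf2-preview/nnlm-en-dim50/1"))

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Fall back to select TF ops for anything that has no TFLite builtin kernel;
# the resulting .tflite file then needs an interpreter built with the flex delegate.
converter.target_spec.supported_ops = [
    tf.lite.OpsSet.TFLITE_BUILTINS,
    tf.lite.OpsSet.SELECT_TF_OPS,
]
tflite_model = converter.convert()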
cut function: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'
How to solve: cut function: Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'
I want to change the contents of a column in a DataFrame to "good" or "bad". The column is filled with numbers from 1 to 10: 1-5 is bad, 6-10 is good. I want to use the cut method for this.
bins = (1, 5.5, 10)
rating = ['bad', 'good']
game['useropinion'] = pd.cut(rating, bins)
The result after running it:
Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'
What is wrong, and how can I fix it?
Solution
You can do it like this:
game['useropinion'] = pd.cut(game['useropinion'], bins, labels=rating)
Edit: to answer the question, you were trying to cut rating instead of the user-opinion data, so you naturally get the cast error, because rating is an array of strings while your bins are numeric.
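For completeness, here is a small self-contained sketch of the corrected call; the DataFrame contents are made up for illustration:

import pandas as pd

# Hypothetical stand-in for the real `game` DataFrame.
game = pd.DataFrame({'useropinion': [2, 4, 5, 6, 8, 10]})

bins = (1, 5.5, 10)            # (1, 5.5] -> bad, (5.5, 10] -> good
rating = ['bad', 'good']

# Cut the numeric column and attach the string labels, instead of cutting the labels themselves.
game['useropinion'] = pd.cut(game['useropinion'], bins, labels=rating)
print(game)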

InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true
How to solve InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true
I use PCA to reduce the dimensionality of the images before comparing them with the structural similarity index (SSIM). After applying PCA, tf.image.ssim raises an error.
Here I compare the images without PCA, and this works perfectly:
import numpy as np
import tensorflow as tf
import time

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data(
    path='mnist.npz'
)

start = time.time()
for i in range(1, 6000):
    x_train_zero = np.expand_dims(x_train[0], axis=2)
    x_train_expanded = np.expand_dims(x_train[i], axis=2)
    print(tf.image.ssim(x_train_zero, x_train_expanded, 255))
print(time.time() - start)
Here I apply PCA to reduce the dimensionality of the images so that SSIM takes less time to compare them:
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

x_train = x_train.reshape(60000, -1)
scaler = StandardScaler()
X_scaled = scaler.fit_transform(x_train)
pca = PCA()
pca = PCA(n_components=11)
X_pca = pca.fit_transform(X_scaled).reshape(60000, 11, 1)

start = time.time()
for i in range(1, 6000):
    X_pca_zero = np.expand_dims(X_pca[0], axis=2)
    X_pca_expanded = np.expand_dims(X_pca[i], axis=2)
    print(tf.image.ssim(X_pca_zero, X_pca_expanded, 255))
print(time.time() - start)
This code throws the error: InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true. Summary data: 11, 1, 1, 11.
Solution
In short, the error occurs because the inputs you pass to tf.image.ssim, X_pca_zero and X_pca_expanded, are too small for the SSIM filter: with the default filter_size=11, each image must be at least 11x11, while after PCA your "images" have shape (11, 1, 1). An example of how you could change the code is sketched below.
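The answer's original snippet is not preserved here, so the following is only a rough sketch of the same idea: shrink filter_size so the SSIM window fits the PCA-reduced data. Whether SSIM is still a meaningful measure on PCA components is a separate question.

start = time.time()
for i in range(1, 6000):
    X_pca_zero = np.expand_dims(X_pca[0], axis=2)       # shape (11, 1, 1)
    X_pca_expanded = np.expand_dims(X_pca[i], axis=2)   # shape (11, 1, 1)
    # The default filter_size=11 requires images of at least 11x11;
    # shrink it so it does not exceed the smallest spatial dimension.
    print(tf.image.ssim(X_pca_zero, X_pca_expanded, 255, filter_size=1))
print(time.time() - start)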

InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true; summary data: b'No files matched pattern:
How to solve InvalidArgumentError: Expected 'tf.Tensor(False, shape=(), dtype=bool)' to be true; summary data: b'No files matched pattern:
When I run the following line on Google Colab:
tf.data.Dataset.list_files('/content/gdrive/MyDrive/Experiment/train/*.jpg')
I get the error shown in the title. I have been stuck on this for the past two weeks, please help. Also, I had already mounted Google Drive successfully before running this line.
Solution
Here is an example that I am using.
from google.colab import drive
drive.mount('/content/drive')

def load_image(filepath):
    raw_img = tf.io.read_file(filepath)
    img_tensor_int = tf.image.decode_jpeg(raw_img, channels=3)
    img_tensor_flt = tf.image.convert_image_dtype(img_tensor_int, tf.float32)
    return img_tensor_flt, img_tensor_flt

def load_dataset(split):
    print('/content/drive/MyDrive/CelebAsubset/' + split + '/*.jpg')
    train_list_ds = tf.data.Dataset.list_files('/content/drive/MyDrive/CelebAsubset/' + split + '/*.jpg', shuffle=False)
    train_ds = train_list_ds.map(load_image)
    return train_ds

train_ds = load_dataset('train')
val_ds = load_dataset('val')
test_ds = load_dataset('test')
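As a quick sanity check before building the dataset, it can also help to confirm that the glob pattern actually matches files on the mounted drive. A short sketch, reusing the questioner's path:

import tensorflow as tf

pattern = '/content/gdrive/MyDrive/Experiment/train/*.jpg'
matched = tf.io.gfile.glob(pattern)
print(len(matched), 'files matched')
# If this prints 0, tf.data.Dataset.list_files will raise the same
# "No files matched pattern" InvalidArgumentError.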
InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1] (Keras)
How to solve InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1] (Keras)
I am trying to build a GAN model with Keras, but I am stuck on the error message InvalidArgumentError: You must feed a value for placeholder tensor 'conv2d_17_input' with dtype float and shape [?,1,19,1].
The data comes from real (tabular) data, and I am trying to use the GAN to address a class-imbalance problem; that is why the data shape differs from ordinary images such as 32x32.
If anyone knows what is going on here, please help. Much appreciated.
The code is as follows:
# Imports inferred from how the code is used below (they are not shown in the original post):
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from tensorflow.keras import backend as K, initializers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import (Conv2D, MaxPooling2D, Flatten, Dense,
                                     Dropout, LeakyReLU, Reshape, BatchNormalization)
from tensorflow.keras.optimizers import Adam

class Data:
    """
    Define dataset for training GAN
    """
    def __init__(self, data, batch_size, z_input_dim):
        X_train, y_train, X_test, y_test = train_test_split(
            data.iloc[:, :-1].values, data.result.values, test_size=0.3)
        self.x_data = X_train
        self.x_data = self.x_data.reshape((self.x_data.shape[0], 1) + (self.x_data.shape[1], 1))
        self.batch_size = batch_size
        self.z_input_dim = z_input_dim

    def get_real_sample(self):
        """
        get real sample mnist images
        :return: batch_size number of mnist image data
        """
        return self.x_data[np.random.randint(0, self.x_data.shape[0], size=self.batch_size)]

    def get_z_sample(self, sample_size):
        """
        get z sample data
        :return: random z data (batch_size, z_input_dim) size
        """
        return np.random.uniform(-1.0, 1.0, (sample_size, self.z_input_dim))
And the GAN model:
class GAN:
    def __init__(self, learning_rate, z_input_dim):
        """
        init params
        :param learning_rate: learning rate of optimizer
        :param z_input_dim: input dim of z
        """
        self.learning_rate = learning_rate
        self.z_input_dim = z_input_dim
        self.D = self.discriminator()
        self.G = self.generator()
        self.GD = self.combined()

    def discriminator(self):
        """
        define discriminator
        """
        D = Sequential()
        # input_shape reconstructed as (1, 19, 1) to match the shape [?,1,19,1] in the error message
        D.add(Conv2D(128, (1, 3), input_shape=(1, 19, 1),
                     kernel_initializer=initializers.RandomNormal(stddev=0.02),
                     data_format='channels_last'))
        D.add(LeakyReLU(0.2))
        D.add(MaxPooling2D(pool_size=(1, 2), strides=2, data_format='channels_last'))
        D.add(Conv2D(256, padding='same', data_format='channels_last'))  # kernel size not shown in the original post
        D.add(Flatten())
        D.add(Dense(128))
        D.add(LeakyReLU(0.2))
        D.add(Dropout(0.3))
        D.add(Dense(1, activation='sigmoid'))
        adam = Adam(lr=self.learning_rate, beta_1=0.5)
        D.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
        return D

    def generator(self):
        """
        define generator
        """
        G = Sequential()
        G.add(Dense(256, input_dim=self.z_input_dim))
        G.add(LeakyReLU(0.2))
        G.add(Dense(19))
        G.add(LeakyReLU(0.2))
        G.add(BatchNormalization())
        # target shape reconstructed as (1, 19, 1) to match the discriminator input
        G.add(Reshape((1, 19, 1), input_shape=(19,)))
        G.add(Conv2D(1, activation='tanh', data_format='channels_last'))  # kernel size not shown in the original post
        adam = Adam(lr=self.learning_rate, beta_1=0.5)
        G.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
        return G

    def combined(self):
        """
        define combined gan model
        """
        G, D = self.G, self.D
        D.trainable = False
        GD = Sequential()
        GD.add(G)
        GD.add(D)
        adam = Adam(lr=self.learning_rate, beta_1=0.5)
        GD.compile(loss='binary_crossentropy', optimizer=adam, metrics=['accuracy'])
        D.trainable = True
        return GD
Finally, the driver code:
class Model:
    def __init__(self, data, batch_size, epochs, learning_rate, z_input_dim, n_iter_D, n_iter_G):
        self.epochs = epochs
        self.batch_size = batch_size
        self.learning_rate = learning_rate
        self.z_input_dim = z_input_dim
        self.data = Data(data, self.batch_size, self.z_input_dim)
        # the reason why D, G differ in iter: Generator needs more training than discriminator
        self.n_iter_D = n_iter_D
        self.n_iter_G = n_iter_G
        self.gan = GAN(self.learning_rate, self.z_input_dim)
        # print status
        batch_count = self.data.x_data.shape[0] / self.batch_size
        print('Epochs:', self.epochs)
        print('Batch size:', self.batch_size)
        print('Batches per epoch:', batch_count)
        print('Learning rate:', self.learning_rate)
        print('Image data format:', K.image_data_format())

    def fit(self):
        self.d_loss = []
        self.g_loss = []
        for epoch in range(self.epochs):
            # train discriminator by real data
            dloss = 0
            for iter in range(self.n_iter_D):
                dloss = self.train_D()
            # train GD by generated fake data
            gloss = 0
            for iter in range(self.n_iter_G):
                gloss = self.train_G()
            # save loss data
            self.d_loss.append(dloss)
            self.g_loss.append(gloss)
            # plot and save model each 20n epoch
            if epoch % 20 == 0:
                print("Epoch: {}, discriminator loss: {}, Generator loss: {}".format(str(epoch), str(dloss), str(gloss)))
        # show loss after train
        self.plot_loss_graph(self.g_loss, self.d_loss)

    def train_D(self):
        """
        train discriminator
        """
        # Real data
        real = self.data.get_real_sample()
        # Generated data
        z = self.data.get_z_sample(self.batch_size)
        generated_images = self.gan.G.predict(z)
        print(generated_images.shape)
        print(generated_images.dtype)
        # labeling and concat generated, real images
        x = np.concatenate((real, generated_images), axis=0)
        y = [0.9] * self.batch_size + [0] * self.batch_size
        # train discriminator
        self.gan.D.trainable = True
        loss = self.gan.D.train_on_batch(x, y)
        return loss

    def train_G(self):
        """
        train Generator
        """
        # Generated data
        z = self.data.get_z_sample(self.batch_size)
        # labeling
        y = [1] * self.batch_size
        # train generator
        self.gan.D.trainable = False
        loss = self.gan.GD.train_on_batch(z, y)
        return loss

    def plot_loss_graph(self, g_loss, d_loss):
        """
        Save training loss graph
        """
        # show loss graph
        plt.figure(figsize=(10, 8))
        plt.plot(d_loss, label='discriminator loss')
        plt.plot(g_loss, label='Generator loss')
        plt.xlabel('Epoch')
        plt.ylabel('Loss')
        plt.legend()
        plt.show()
When I run the code below:
batch_size = 50
epochs = 100
learning_rate = 0.001
z_input_dim = 100
n_iter_D = 1
n_iter_G = 5
# run model
model = Model(data_dummise, batch_size, epochs, learning_rate, z_input_dim, n_iter_D, n_iter_G)
model.fit()
I get the following error message:
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
in <module>
      8 # run model
      9 model = Model(data_dummise, batch_size, epochs, learning_rate, z_input_dim, n_iter_D, n_iter_G)
---> 10 model.fit()

in fit(self)
     33             gloss = 0
     34             for iter in range(self.n_iter_G):
---> 35                 gloss = self.train_G()
     36
     37             # save loss data

in train_G(self)
     82         # train generator
     83         self.gan.D.trainable = False
---> 84         loss = self.gan.GD.train_on_batch(z, y)
     85         return loss
     86

~/.local/lib/python3.6/site-packages/tensorflow/python/keras/engine/training.py in train_on_batch(self, x, y, sample_weight, class_weight, reset_metrics)
   1173       self._update_sample_weight_modes(sample_weights=sample_weights)
   1174       self._make_train_function()
-> 1175       outputs = self.train_function(ins)  # pylint: disable=not-callable
   1176
   1177     if reset_metrics:

~/.local/lib/python3.6/site-packages/tensorflow/python/keras/backend.py in __call__(self, inputs)
   3290
   3291     fetched = self._callable_fn(*array_vals,
-> 3292                                 run_metadata=self.run_metadata)
   3293     self._call_fetch_callbacks(fetched[-len(self._fetches):])
   3294     output_structure = nest.pack_sequence_as(

~/.local/lib/python3.6/site-packages/tensorflow/python/client/session.py in __call__(self, *args, **kwargs)
   1456     ret = tf_session.TF_SessionRunCallable(self._session._session,
   1457                                            self._handle, args,
-> 1458                                            run_metadata_ptr)
   1459     if run_metadata:
   1460       proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)

InvalidArgumentError: You must feed a value for placeholder tensor
'conv2d_17_input' with dtype float and shape [?,1,19,1]
	 [[{{node conv2d_17_input}}]]
That wraps up InvalidArgumentError: Cannot convert a Tensor of dtype resource to a NumPy array. Thank you for taking the time to read this site's content; for more on the cut function error "Cannot cast array data from dtype('float64') to dtype('<U32') according to the rule 'safe'" and the other errors covered above, don't forget to search this site.