
Python numpy module: matmul() example source code (python numpy.mat)



Many readers have recently been asking about Python numpy module matmul() example source code and python numpy.mat, so this article answers both questions in detail. It also covers the related topics: einsum and matmul; InvalidArgumentError: cannot compute MatMul as input #0 (zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul]; Error in matMul: inner shapes (1) and (2) of Tensors with shapes 684,1 and 2,1 and transposeA=false and transposeB=false must match; and matmul: Input operand 1 has a mismatch. Let's get started.

Contents:

Python numpy module: matmul() example source code (python numpy.mat)
einsum and matmul
InvalidArgumentError: cannot compute MatMul as input #0 (zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul]
Error in matMul: inner shapes (1) and (2) of Tensors with shapes 684,1 and 2,1 and transposeA=false and transposeB=false must match
matmul: Input operand 1 has a mismatch

Python numpy module: matmul() example source code

We extracted the following 50 code examples from open-source Python projects to illustrate how numpy.matmul() is used.
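Before the project snippets, here is a minimal usage sketch of numpy.matmul() itself, showing the plain 2-D matrix product and the batched (stacked) case; the array values are arbitrary and only for illustration.

import numpy as np

# 2-D case: ordinary matrix product, (2, 3) @ (3, 2) -> (2, 2)
a = np.arange(6).reshape(2, 3)
b = np.arange(6).reshape(3, 2)
print(np.matmul(a, b))            # same result as a @ b for 2-D inputs

# Batched case: matmul broadcasts over the leading "stack" dimensions,
# (4, 2, 3) @ (4, 3, 2) -> (4, 2, 2)
a_stack = np.random.random((4, 2, 3))
b_stack = np.random.random((4, 3, 2))
print(np.matmul(a_stack, b_stack).shape)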

Project: Stein-Variational-gradient-descent    Author: DartML

def svgd_kernel(self, h=-1):
    sq_dist = pdist(self.theta)
    pairwise_dists = squareform(sq_dist)**2
    if h < 0:  # if h < 0, use the median trick
        h = np.median(pairwise_dists)
        h = np.sqrt(0.5 * h / np.log(self.theta.shape[0]+1))

    # compute the rbf kernel
    Kxy = np.exp(-pairwise_dists / h**2 / 2)

    dxkxy = -np.matmul(Kxy, self.theta)
    sumkxy = np.sum(Kxy, axis=1)
    for i in range(self.theta.shape[1]):
        dxkxy[:, i] = dxkxy[:, i] + np.multiply(self.theta[:, i], sumkxy)
    dxkxy = dxkxy / (h**2)
    return (Kxy, dxkxy)
Project: Lattice-Based-Signatures    Author: krishnacharya

def KeyGen(**kwargs):
    '''
    Appendix B of BLISS paper
    m_bar = m + n

    o/p:
    A: Public Key, n x m' numpy array
    S: Secret Key, m' x n numpy array
    '''
    q, n, m, alpha = kwargs['q'], kwargs['n'], kwargs['m'], kwargs['alpha']
    Aq_bar = util.crypt_secure_matrix(-(q-1)/2, (q-1)/2, m)
    S_bar = util.crypt_secure_matrix(-(2)**alpha, (2)**alpha, n)  # alpha is small enough, we need not reduce (mod q)
    S = np.vstack((S_bar, np.eye(n, dtype=int)))  # dimension is m_bar x n, elements are in Z mod(2q)
    A = np.hstack((2*Aq_bar, q * np.eye(n, dtype=int) - 2*np.matmul(Aq_bar, S_bar)))  # dimension is n x m_bar, elements are in Z mod(2q)
    #return util.matrix_to_Zq(A, 2*q), S, Aq_bar, S_bar
    return util.matrix_to_Zq(A, 2*q), S
Project: Lattice-Based-Signatures    Author: krishnacharya

def test():
    # Classical SIS parameters
    n, m, alpha, q = 128, 872, 1, 114356107
    kappa = 20

    # discrete Gaussian parameters
    sd = 300
    eta = 1.2

    A, S = KeyGen(q=q, n=n, m=m, alpha=alpha)
    #print np.array(np.matmul(A, S) - q*np.eye(n), dtype=float)/(2*q)  # to test AS = q mod(2q)
    z, c = Sign(msg="Hello Bob", A=A, S=S, sd=sd, q=q, M=3.0, kappa=kappa)
    print z
    print c
    print Verify(msg="Hello Bob", A=A, m=m, n=n, sd=sd, q=q, eta=eta, z=z, c=c, kappa=kappa)
    print Verify(msg="Hello Robert", A=A, m=m, n=n, sd=sd, q=q, eta=eta, z=z, c=c, kappa=kappa)
    print Verify(msg="Hello Roberto", A=A, m=m, n=n, sd=sd, q=q, eta=eta, z=z, c=c, kappa=kappa)
    print Verify(msg="Hola Roberto", A=A, m=m, n=n, sd=sd, q=q, eta=eta, z=z, c=c, kappa=kappa)
Project: Lattice-Based-Signatures    Author: krishnacharya

def Verify(**kwargs):
    '''
    Verification for the signature
    i/p:
    msg: the string sent by the sender
    (z, c): vectors in Zq, the signature
    A : numpy array, Verification Key, dimension n x m
    T : the matrix AS mod q, it is used in the verification of the signature
    '''
    msg, z, c, A, T, sd, eta, k, q = kwargs['msg'], kwargs['z'], kwargs['c'], kwargs['A'], kwargs['T'], kwargs['sd'], kwargs['eta'], kwargs['k'], kwargs['q']
    norm_bound = eta * sd * np.sqrt(m)
    # checks that the norm of z is small and that H(Az - Tc mod q, msg) hashes to c
    vec = util.vector_to_Zq(np.array(np.matmul(A, z) - np.matmul(T, c)), q)
    hashedList = util.hash_to_baseb(vec, msg, 3, k)
    print hashedList, c
    if np.sqrt(z.dot(z)) <= norm_bound and np.array_equal(c, hashedList):
        return True
    else:
        return False
Project: Lattice-Based-Signatures    Author: krishnacharya

def KeyGen(n, m, k, d, q):
    '''
    input:
    q : polynomial size prime number
    n, m, k : dimension specifiers
    d : SIS parameter, hardest instances are where d ~ q^(n/m)

    output:
    Signing Key S : matrix of dimension m x k with coefficients in [-d, d]
    Verification Key A : matrix of dimension n x m with coefficients from [-(q-1)/2, (q-1)/2]
    T : the matrix AS, it is used in the verification of the signature
    '''
    S = crypt_secure_matrix(d, k)
    A = crypt_secure_matrix((q-1)/2, m)
    T = np.matmul(A, S)
    return S, T
Project: Neural_Artistic_Style    Author: everfor

def transfer_color(content, style):
    import scipy.linalg as sl
    # Mean and covariance of content
    content_mean = np.mean(content, axis=(0, 1))
    content_diff = content - content_mean
    content_diff = np.reshape(content_diff, (-1, content_diff.shape[2]))
    content_covariance = np.matmul(content_diff.T, content_diff) / (content_diff.shape[0])

    # Mean and covariance of style
    style_mean = np.mean(style, axis=(0, 1))
    style_diff = style - style_mean
    style_diff = np.reshape(style_diff, (-1, style_diff.shape[2]))
    style_covariance = np.matmul(style_diff.T, style_diff) / (style_diff.shape[0])

    # Calculate A and b
    A = np.matmul(sl.sqrtm(content_covariance), sl.inv(sl.sqrtm(style_covariance)))
    b = content_mean - np.matmul(A, style_mean)

    # Construct new style
    new_style = np.reshape(style, (-1, style.shape[2])).T
    new_style = np.matmul(A, new_style).T
    new_style = np.reshape(new_style, style.shape)
    new_style = new_style + b

    return new_style
Project: Stein-Variational-gradient-descent    Author: DartML

def svgd_kernel(self, theta, h=-1):
    sq_dist = pdist(theta)
    pairwise_dists = squareform(sq_dist)**2
    if h < 0:  # if h < 0, use the median trick
        h = np.median(pairwise_dists)
        h = np.sqrt(0.5 * h / np.log(theta.shape[0]+1))

    # compute the rbf kernel
    Kxy = np.exp(-pairwise_dists / h**2 / 2)

    dxkxy = -np.matmul(Kxy, theta)
    sumkxy = np.sum(Kxy, axis=1)
    for i in range(theta.shape[1]):
        dxkxy[:, i] = dxkxy[:, i] + np.multiply(theta[:, i], sumkxy)
    dxkxy = dxkxy / (h**2)
    return (Kxy, dxkxy)
Project: pyRSSs    Author: butala

def run_random_sim(sim, L):
    """
    Run *L* simulations of the state space model specified by *sim*
    (see :func:`setup_random_sim`). Each simulation is added to *sim*,
    indexed by an integer identifier.
    """
    sim['L'] = L
    for l in range(L):
        sim[l] = defaultdict(list)
        x_i = sim['mu'] + NP.matmul(sim['PI_sqrt'], NP.random.randn(sim['N']))
        for i in range(sim['I']):
            sim[l]['x'].append(x_i)
            # measurement
            v_i = NP.matmul(sim['R_sqrt'][i], NP.random.randn(sim['M']))
            sim[l]['y'].append(NP.matmul(sim['H'][i], sim[l]['x'][i]) + v_i)
            # time update
            u_i = NP.matmul(sim['Q_sqrt'][i], NP.random.randn(sim['N']))
            x_i = NP.matmul(sim['F'][i], x_i) + u_i
    return sim
Project: pyRSSs    Author: butala

def sqrt_kf_sim(sim):
    """
    Process each simulation trial generated with
    :func:`setup_random_test` with a Kalman filter and return the
    posterior state estimates and error covariances.
    """
    post = defaultdict(dict)
    for l in range(sim['L']):
        x_hat_l, P_sqrt_l = sqrt_kalman_filter(sim[l]['y'],
                                               sim['H'],
                                               sim['R_sqrt'],
                                               sim['F'],
                                               sim['Q_sqrt'],
                                               sim['mu'],
                                               sim['PI_sqrt'])
        post[l]['x_hat'] = x_hat_l
        if l == 0:
            post['P'] = [NP.matmul(x, x.T) for x in P_sqrt_l]
        post[l]['error'] = []
        for x_i, x_hat_i in izip(sim[l]['x'], post[l]['x_hat']):
            post[l]['error'].append(x_hat_i - x_i)
    return post
Project: pyRSSs    Author: butala

def sqrt_kf_tu(x_hat_posterior,
               P_sqrt_posterior,
               F_i,
               Q_sqrt_i,
               z_i=None):
    """
    Square root Kalman filter time update. Given the following:
    - *x_hat_posterior*: posterior state estimate (N)
    - *P_sqrt_posterior*: posterior error covariance square root (NxN)
    - *F_i*: time update operator (NxN)
    - *Q_sqrt_i*: time update noise covariance square root (NxN)
    - *z_i*: (optional) systematic time update input (N)

    Return the tuple containing the one time step prediction of the
    state and the square root of the error covariance.
    """
    N, _ = F_i.shape
    x_hat_prior = NP.matmul(F_i, x_hat_posterior)
    if z_i is not None:
        x_hat_prior += z_i
    A_T = NP.block([NP.matmul(F_i, P_sqrt_posterior), Q_sqrt_i])
    R_T = NP.linalg.qr(A_T.T, mode='r')
    P_sqrt_prior = R_T.T[:, :N]
    return x_hat_prior, P_sqrt_prior
Project: Tweaker-3    Author: ChristophSchranz

def rotate_ascii_stl(self, rotation_matrix, content, filename):
    """Rotate the mesh array and save as ASCII STL."""
    mesh = np.array(content, dtype=np.float64)

    # prefix area vector, if not already done (e.g. in STL format)
    if len(mesh[0]) == 3:
        row_number = int(len(content)/3)
        mesh = mesh.reshape(row_number, 3, 3)

    # upgrade numpy with: "pip install numpy --upgrade"
    rotated_content = np.matmul(mesh, rotation_matrix)

    v0 = rotated_content[:, 0, :]
    v1 = rotated_content[:, 1, :]
    v2 = rotated_content[:, 2, :]
    normals = np.cross(np.subtract(v1, v0), np.subtract(v2, v0)) \
        .reshape(int(len(rotated_content)), 3)
    rotated_content = np.hstack((normals, rotated_content))

    tweaked = list("solid %s" % filename)
    tweaked += list(map(self.write_facett, list(rotated_content)))
    tweaked.append("\nendsolid %s\n" % filename)
    tweaked = "".join(tweaked)

    return tweaked
Project: radar    Author: amoose136

def test_exceptions(self):
    dims = [
        ((1,), (2,)),            # mismatched vector vector
        ((2, 1,), (2,)),         # mismatched matrix vector
        ((2,), (1, 2)),          # mismatched vector matrix
        ((1, 2), (3, 1)),        # mismatched matrix matrix
        ((1,), ()),              # vector scalar
        ((), (1)),               # scalar vector
        ((1, 1), ()),            # matrix scalar
        ((), (1, 1)),            # scalar matrix
        ((2, 2, 1), (3, 1, 2)),  # cannot broadcast
        ]

    for dt, (dm1, dm2) in itertools.product(self.types, dims):
        a = np.ones(dm1, dtype=dt)
        b = np.ones(dm2, dtype=dt)
        assert_raises(ValueError, self.matmul, a, b)
Project: radar    Author: amoose136

def test_shapes(self):
    dims = [
        ((1, 1), (2, 1, 1)),     # broadcast first argument
        ((2, 1, 1), (1, 1)),     # broadcast second argument
        ((2, 1, 1), (2, 1, 1)),  # matrix stack sizes match
        ]

    for dt, (dm1, dm2) in itertools.product(self.types, dims):
        a = np.ones(dm1, dtype=dt)
        b = np.ones(dm2, dtype=dt)
        res = self.matmul(a, b)
        assert_(res.shape == (2, 1, 1))

    # vector vector returns scalars.
    for dt in self.types:
        a = np.ones((2,), dtype=dt)
        b = np.ones((2,), dtype=dt)
        c = self.matmul(a, b)
        assert_(np.array(c).shape == ())
Project: radar    Author: amoose136

def test_numpy_ufunc_override(self):
    # 2016-01-29: NUMPY_UFUNC_DISABLED
    return

    class A(np.ndarray):
        def __new__(cls, *args, **kwargs):
            return np.array(*args, **kwargs).view(cls)

        def __numpy_ufunc__(self, ufunc, method, pos, inputs, **kwargs):
            return "A"

    class B(np.ndarray):
        def __new__(cls, *args, **kwargs):
            return NotImplemented

    a = A([1, 2])
    b = B([1, 2])
    c = np.ones(2)
    assert_equal(self.matmul(a, b), "A")
    assert_equal(self.matmul(b, a), "A")
    assert_raises(TypeError, self.matmul, b, c)
Project: ababe    Author: unkcpz

def get_symmetry_permutation(self):
    """
    This is an object function to get the permutation group operators.
    Represented as a table.
    """
    sym_perm = []
    numbers = [i for i in range(self.num_count)]
    sym_mat = spglib.get_symmetry(self._spg_cell, symprec=self.symprec)
    ops = [(r, t) for r, t in zip(sym_mat['rotations'],
                                  sym_mat['translations'])]
    for r, t in ops:
        pos_new = np.transpose(np.matmul(r, self._positions.T)) + t
        perm = self._get_new_id_seq(pos_new, numbers)
        sym_perm.append(perm)

    return sym_perm
Project: ababe    Author: unkcpz

def supercell(self, scale_mat):
    """
    Get the supercell of the origin gcell.
    scale_mat is similar to the H matrix in the superlattice generator.
    """
    # return self.__class__(...)
    sarr_lat = np.matmul(scale_mat, self.lattice)
    # coor_conv_pos = np.matmul(self.positions, self.lattice)
    # o_conv_pos = np.matmul(coor_conv_pos, np.linalg.inv(scale_mat))
    o_conv_pos = np.matmul(self.positions, np.linalg.inv(scale_mat))
    o_pos = self.get_frac_from_mat(scale_mat)

    l_of_positions = [i for i in map(lambda x: x + o_pos, list(o_conv_pos))]
    pos = np.concatenate(l_of_positions, axis=0)

    n = scale_mat.diagonal().prod()
    numbers = np.repeat(self.numbers, n)

    return self.__class__(sarr_lat, pos, numbers)
Project: speech_feature_extractor    Author: ZhihaoDU

def ams_extractor(x, sr, win_len, shift_len, order):
    from scipy.signal import hilbert
    envelope = np.abs(hilbert(x))
    for i in range(order-1):
        envelope = np.abs(hilbert(envelope))
    envelope = envelope * 1./3.
    frames = (len(envelope) - win_len) // shift_len
    hanning_window = np.hanning(win_len)
    ams_feature = np.zeros(shape=(15, frames))
    wts = cal_triangle_window(0, sr//2, 15, 15.6, 400)
    for i in range(frames):
        one_frame = x[i*shift_len:i*shift_len+win_len]
        one_frame = one_frame * hanning_window
        frame_fft = np.abs(np.fft.fft(one_frame, win_len))
        ams_feature[:, i] = np.matmul(wts, frame_fft)
    return ams_feature
Project: speech_feature_extractor    Author: ZhihaoDU

def ams_extractor(x, sr, win_len, shift_len, order=1, decimate_coef=1./4.):
    from scipy.signal import hilbert
    envelope = np.abs(hilbert(x))
    for i in range(order-1):
        envelope = np.abs(hilbert(envelope))
    envelope = envelope * decimate_coef
    frames = (len(envelope) - win_len) // shift_len
    hanning_window = np.hanning(win_len)
    ams_feature = np.zeros(shape=(15, frames))
    wts = cal_triangle_window(0, sr//2, 15, 15.6, 400)
    for i in range(frames):
        one_frame = x[i*shift_len:i*shift_len+win_len]
        one_frame = one_frame * hanning_window
        frame_fft = np.abs(np.fft.fft(one_frame, win_len))
        ams_feature[:, i] = np.matmul(wts, frame_fft)
    return ams_feature
Project: speech_feature_extractor    Author: ZhihaoDU

def unkNown_feature_extractor(x, barks, inner_win, inner_shift, win_type, method_version):
    x_spectrum = stft_extractor(x, win_type)
    coef = get_fft_bark_mat(sr, 20, sr//2)
    bark_spect = np.matmul(coef, x_spectrum)
    ams = np.zeros((barks, inner_win//2+1, (bark_spect.shape[1] - inner_win)//inner_shift))
    for i in range(barks):
        channel_stft = stft_extractor(bark_spect[i, :], 'hanning')
        if method_version == 'v1':
            ams[i, :, :] = 20 * np.log(np.abs(channel_stft[:inner_win//2+1, :(bark_spect.shape[1] - inner_win)//inner_shift]))
        elif method_version == 'v2':
            channel_amplitude = np.abs(channel_stft[:inner_win//2+1, :(bark_spect.shape[1] - inner_win)//inner_shift])
            channel_angle = np.angle(channel_stft[:inner_win//2+1, :(bark_spect.shape[1] - inner_win)//inner_shift])
            channel_angle = channel_angle - (np.floor(channel_angle / (2.*np.pi)) * (2.*np.pi))
            ams[i, :, :] = np.power(channel_amplitude, 1./3.) * channel_angle
        else:
            ams[i, :, :] = np.abs(channel_stft)
    return ams
Project: bifrost    Author: ledatelescope

def run_test_matmul_aa_ci8_shape(self, shape, transpose=False):
    # TODO: This currently never triggers the transpose path in the backend
    shape_complex = shape[:-1] + (shape[-1] * 2,)
    # Note: The xGPU-like correlation kernel does not support input values of -128 (only [-127:127])
    a8 = ((np.random.random(size=shape_complex) * 2 - 1) * 127).astype(np.int8)
    a_gold = a8.astype(np.float32).view(np.complex64)
    if transpose:
        a_gold = H(a_gold)
    # Note: np.matmul seems to be slow and inaccurate when there are batch dims
    c_gold = np.matmul(a_gold, H(a_gold))
    triu = np.triu_indices(shape[-2] if not transpose else shape[-1], 1)
    c_gold[..., triu[0], triu[1]] = 0
    a = a8.view(bf.DataType.ci8)
    a = bf.asarray(a, space='cuda')
    if transpose:
        a = H(a)
    c = bf.zeros_like(c_gold, space='cuda')
    self.linalg.matmul(1, a, None, 0, c)
    c = c.copy('system')
    np.testing.assert_allclose(c, c_gold, RTOL, ATOL)
Project: bifrost    Author: ledatelescope

def run_test_matmul_aa_dtype_shape(self, shape, dtype, axes=None, conj=False):
    a = ((np.random.random(size=shape)) * 127).astype(dtype)
    if axes is None:
        axes = range(len(shape))
    aa = a.transpose(axes)
    if conj:
        aa = aa.conj()
    c_gold = np.matmul(aa, H(aa))
    triu = np.triu_indices(shape[axes[-2]], 1)
    c_gold[..., triu[0], triu[1]] = 0
    a = bf.asarray(a, space='cuda')
    aa = a.transpose(axes)
    if conj:
        aa = aa.conj()
    c = bf.zeros_like(c_gold, space='cuda')
    self.linalg.matmul(1, aa, None, 0, c)
    c = c.copy('system')
    np.testing.assert_allclose(c, c_gold, RTOL, ATOL)
Project: bifrost    Author: ledatelescope

def run_test_matmul_ab_ci8_shape(self, shape, k, transpose=False):
    ashape_complex = shape[:-2] + (shape[-2], k * 2)
    bshape_complex = shape[:-2] + (k, shape[-1] * 2)
    a8 = (np.random.random(size=ashape_complex) * 255).astype(np.int8)
    b8 = (np.random.random(size=bshape_complex) * 255).astype(np.int8)
    a_gold = a8.astype(np.float32).view(np.complex64)
    b_gold = b8.astype(np.float32).view(np.complex64)
    if transpose:
        a_gold, b_gold = H(b_gold), H(a_gold)
    c_gold = np.matmul(a_gold, b_gold)
    a = a8.view(bf.DataType.ci8)
    b = b8.view(bf.DataType.ci8)
    a = bf.asarray(a, space='cuda')
    b = bf.asarray(b, space='cuda')
    if transpose:
        a, b = H(b), H(a)
    c = bf.zeros_like(c_gold, space='cuda')
    self.linalg.matmul(1, a, b, 0, c)
    c = c.copy('system')
    np.testing.assert_allclose(c, c_gold, RTOL, ATOL)
Project: bifrost    Author: ledatelescope

def run_benchmark_matmul_aa_correlator_kernel(self, ntime, nstand, nchan):
    x_shape = (ntime, nchan, nstand*2)
    perm = [1, 0, 2]
    x8 = ((np.random.random(size=x_shape+(2,))*2-1)*127).astype(np.int8)
    x = x8.astype(np.float32).view(np.complex64).reshape(x_shape)
    x = x.transpose(perm)
    b_gold = np.matmul(H(x[:, [0], :]), x[:, [0], :])
    triu = np.triu_indices(x_shape[-1], 1)
    b_gold[..., triu[0], triu[1]] = 0
    x = x8.view(bf.DataType.ci8).reshape(x_shape)
    x = bf.asarray(x, space='cuda')
    x = x.transpose(perm)
    b = bf.zeros_like(b_gold, space='cuda')
    bf.device.stream_synchronize();
    t0 = time.time()
    nrep = 200
    for _ in xrange(nrep):
        self.linalg.matmul(1, x, None, 0, b)
    bf.device.stream_synchronize();
    dt = time.time() - t0
    nflop = nrep * nchan * ntime * nstand*(nstand+1)/2 * 2*2 * 8
    print nstand, '\t', nflop / dt / 1e9, 'GFLOP/s'
    print '\t\t', nrep*ntime*nchan / dt / 1e6, 'MHz'
Project: image-text-matching    Author: llltttppp

def select_negtive(self, i_feat, s_feat, sess, topN=50):
    '''
    Select the triplets with the largest losses \n
    return i_feat_pos, s_feat_pos, i_feat_neg, s_feat_neg
    '''
    feed_dict = {self.image_feat: i_feat, self.sentence_feat: s_feat}
    i_embed, s_embed = sess.run([self.image_fc2, self.sentence_fc2], feed_dict=feed_dict)
    S = np.matmul(i_embed, s_embed.T)
    i_feat_pos = i_feat.repeat(topN, axis=0)
    s_feat_pos = s_feat.repeat(topN, axis=0)
    N = S.shape[0]
    np.fill_diagonal(S, -2*np.ones(N))
    neg_s_idx = S.argsort(axis=1)[:, -topN:]
    neg_i_idx = S.argsort(axis=0)[-topN:, :]
    s_feat_neg = s_feat[neg_s_idx.flatten('C')]
    i_feat_neg = i_feat[neg_i_idx.flatten('F')]
    return i_feat_pos, s_feat_pos, i_feat_neg, s_feat_neg
Project: image-text-matching    Author: llltttppp

def top_K_loss(self, sentence, image, K=30, margin=0.5):
    sim_matrix = tf.matmul(sentence, image, transpose_b=True)
    s_square = tf.reduce_sum(tf.square(sentence), axis=1)
    im_square = tf.reduce_sum(tf.square(image), axis=1)
    d = tf.reshape(s_square, [-1, 1]) - 2 * sim_matrix + tf.reshape(im_square, [1, -1])
    positive = tf.stack([tf.matrix_diag_part(d)] * K, axis=1)
    length = tf.shape(d)[-1]
    d = tf.matrix_set_diag(d, 8 * tf.ones([length]))
    sen_loss_K, _ = tf.nn.top_k(-1.0 * d, K, sorted=False)  # note: this is a negative value
    im_loss_K, _ = tf.nn.top_k(tf.transpose(-1.0 * d), K, sorted=False)  # note: this is a negative value
    sentence_center_loss = tf.nn.relu(positive + sen_loss_K + margin)
    image_center_loss = tf.nn.relu(positive + im_loss_K + margin)
    self.d_neg = (sen_loss_K + im_loss_K) / -2.0
    self.d_pos = positive
    self.endpoint['debug/im_loss_topK'] = -1.0 * im_loss_K
    self.endpoint['debug/sen_loss_topK'] = -1.0 * sen_loss_K
    self.endpoint['debug/d_Matrix'] = d
    self.endpoint['debug/positive'] = positive
    self.endpoint['debug/s_center_loss'] = sentence_center_loss
    self.endpoint['debug/i_center_loss'] = image_center_loss
    self.endpoint['debug/S'] = sim_matrix
    self.endpoint['debug/sentence_square'] = s_square
    self.endpoint['debug/image_square'] = im_square
    return tf.reduce_sum(sentence_center_loss), tf.reduce_sum(image_center_loss)
Project: deepmodels    Author: learningsociety

def conv_feat_map_tensor_gram(conv_fmap_tensor):
    """Compute Gram matrix of conv feature maps.

    Used in style transfer.
    """
    tf.assert_equal(tf.rank(conv_fmap_tensor), 4)
    shape = tf.shape(conv_fmap_tensor)
    num_images = shape[0]
    width = shape[1]
    height = shape[2]
    num_filters = shape[3]
    filters = tf.reshape(conv_fmap_tensor,
                         tf.stack([num_images, -1, num_filters]))
    grams = tf.matmul(filters, filters,
                      transpose_a=True) / tf.to_float(width * height * num_filters)
    return grams
Project: aiida-fleur    Author: broeder-j

def abs_to_rel_f(vector, cell, pbc):
    """
    Converts a position vector in absolute coordinates to relative coordinates
    for a film system.
    """
    # TODO: this currently only works if the z-coordinate is the one with no pbc.
    # Therefore if a structure with x non-pbc is given this should also work.
    # Maybe write a 'transform film to fleur_film' routine?
    if len(vector) == 3:
        if pbc[2] == False:
            # leave the z coordinate absolute
            # convert only x and y.
            postionR = np.array(vector)
            postionR_f = np.array(postionR[:2])
            cell_np = np.array(cell)
            cell_np = np.array(cell_np[0:2, 0:2])
            inv_cell_np = np.linalg.inv(cell_np)
            new_xy = [i for i in np.matmul(postionR_f, inv_cell_np)]  #np.matmul(inv_cell_np, postionR_f)]
            new_rel_pos_f = [new_xy[0], new_xy[1], postionR[2]]
            return new_rel_pos_f
        else:
            print 'FLEUR can not handle this type of film coordinate'
    else:
        return False
Project: aiida-fleur    Author: broeder-j

def rel_to_abs_f(vector, cell):
    """
    Converts a position vector in internal coordinates to absolute coordinates
    in Angstroem for a film structure (2D).
    """
    # TODO: this currently only works if the z-coordinate is the one with no pbc.
    # Therefore if a structure with x non-pbc is given this should also work.
    # Maybe write a 'transform film to fleur_film' routine?
    if len(vector) == 3:
        postionR = np.array(vector)
        postionR_f = np.array(postionR[:2])
        #print postionR_f
        cell_np = np.array(cell)
        cell_np = np.array(cell_np[0:2, 0:2])
        #print cell_np
        new_xy = [i for i in np.matmul(postionR_f, cell_np)]
        new_abs_pos_f = [new_xy[0], new_xy[1], postionR[2]]
        return new_abs_pos_f
    else:
        return False
Project: vulk    Author: realitix

def mul(self, matrix):
    '''Multiply this matrix by `matrix`.

    The order of operation is: `this @ matrix`.

    *Parameters:*

    - `matrix`: `Matrix4`
    '''
    # Reshape the flat values to 4x4 so they can be passed to matmul
    view1 = np.reshape(self._values, (4, 4))
    view2 = np.reshape(matrix.values, (4, 4))
    self.tmp.shape = (4, 4)

    # np.matmul(view2, view1, out=out)
    np.matmul(view2, view1, out=self.tmp)

    self.tmp.shape = (16,)
    self._values[:] = self.tmp

    return self
Project: Sohu-LuckData-Image-Text-Matching-Competition    Author: WeitaoVan

def select_negtive(self, i_feat, s_feat, sess, topN=50):
    '''
    Select the triplets with the largest losses \n
    return i_feat_pos, s_feat_pos, i_feat_neg, s_feat_neg
    '''
    feed_dict = {self.image_feat: i_feat, self.sentence_feat: s_feat}
    i_embed, s_embed = sess.run([self.image_fc2, self.sentence_fc2], feed_dict=feed_dict)
    S = np.matmul(i_embed, s_embed.T)
    i_feat_pos = i_feat.repeat(topN, axis=0)
    s_feat_pos = s_feat.repeat(topN, axis=0)
    N = S.shape[0]
    np.fill_diagonal(S, -2*np.ones(N))
    neg_s_idx = S.argsort(axis=1)[:, -topN:]
    neg_i_idx = S.argsort(axis=0)[-topN:, :]
    s_feat_neg = s_feat[neg_s_idx.flatten('C')]
    i_feat_neg = i_feat[neg_i_idx.flatten('F')]
    return i_feat_pos, s_feat_pos, i_feat_neg, s_feat_neg
Project: robotics1project    Author: pchorak

def get_xyz(interface, xyz_from_camera):
    angles = interface.current_status.angles[0:3]

    # Get current XYZ
    P0t = dobotModel.forward_kinematics(angles)

    # Getting desired XYZ of end effector
    Pct = np.array(CAMERA_OFFSET)
    R0t = dobotModel.R0T(angles)
    Rtc = np.array([[0, 0], [0, -1]])
    R0c = np.matmul(R0t, Rtc)

    Pta = np.matmul(R0c, xyz_from_camera) - np.matmul(R0c, Pct)
    target = np.reshape(Pta, (3, 1)) + np.reshape(P0t, (3, 1))
    return target


# FUNCTION: Touch - Place the end effector on top of an AR tag
# AR TAGS: DUCKY = 0   DUCKYBOT = 1   OBSTACLE = 2
Project: paradox    Author: ictxiangxin

def __compute_valid_convolution_nd(data, kernel, dimension: int):
    convolution_shape = tuple(data.shape[i] - kernel.shape[i] + 1 for i in range(-1, -dimension - 1, -1))
    list_dimension = reduce(lambda a, b: a * b, convolution_shape)
    data_prefix = data.shape[:-dimension]
    kernel_flat = kernel.ravel()
    data_flat = numpy.zeros(data_prefix + (list_dimension, len(kernel_flat)))
    for i in range(list_dimension):
        tensor_slice_start = [0] * len(kernel.shape)
        tensor_slice = [slice(None)] * len(data.shape)
        tensor_slice_start[-1] = i
        for r in range(-1, -len(kernel.shape) - 1, -1):
            dimension_scale = data.shape[r] - kernel.shape[r] + 1
            if tensor_slice_start[r] >= dimension_scale:
                tensor_slice_start[r + 1] = tensor_slice_start[r] // dimension_scale
                tensor_slice_start[r] %= dimension_scale
            tensor_slice[r] = slice(tensor_slice_start[r], tensor_slice_start[r] + kernel.shape[r])
        sub_convolution_index = (slice(None),) * (len(data.shape) - dimension) + tuple([i, slice(None)])
        data_flat[sub_convolution_index] = data[tensor_slice].reshape(data_prefix + (reduce(lambda a, b: a * b, kernel.shape),))
    convolution_flat = numpy.matmul(data_flat, numpy.flip(kernel_flat, axis=0))
    convolution_nd = convolution_flat.reshape(data_prefix + convolution_shape)
    return convolution_nd
Project: aurora    Author: upul

def test_matmul_two_vars():
    x2 = ad.Variable(name='x2')
    x3 = ad.Variable(name='x3')
    y = ad.matmul(x2, x3)

    grad_x2, grad_x3 = ad.gradients(y, [x2, x3])
    executor = ad.Executor([y, grad_x2, grad_x3])
    x2_val = np.array([[1, 2], [3, 4], [5, 6]])  # 3x2
    x3_val = np.array([[7, 8, 9], [10, 11, 12]])  # 2x3

    y_val, grad_x2_val, grad_x3_val = executor.run(feed_shapes={x2: x2_val, x3: x3_val})

    expected_yval = np.matmul(x2_val, x3_val)
    expected_grad_x2_val = np.matmul(np.ones_like(expected_yval), np.transpose(x3_val))
    expected_grad_x3_val = np.matmul(np.transpose(x2_val), np.ones_like(expected_yval))

    assert isinstance(y, ad.Node)
    assert np.array_equal(y_val, expected_yval)
    assert np.array_equal(grad_x2_val, expected_grad_x2_val)
    assert np.array_equal(grad_x3_val, expected_grad_x3_val)
Project: aurora    Author: upul

def test_matmul_var_and_param():
    x2 = ad.Variable(name="x2")
    w2_val = np.array([[7, 8, 9], [10, 11, 12]])  # 2x3
    w2 = ad.Parameter(name="w2", init=w2_val)
    y = ad.matmul(x2, w2)

    grad_x2, grad_w2 = ad.gradients(y, [x2, w2])

    executor = ad.Executor([y, grad_x2, grad_w2])
    x2_val = np.array([[1, 2], [3, 4], [5, 6]])  # 3x2

    y_val, grad_x2_val, grad_w2_val = executor.run(feed_shapes={x2: x2_val})

    expected_yval = np.matmul(x2_val, w2_val)
    expected_grad_x2_val = np.matmul(np.ones_like(expected_yval), np.transpose(w2_val))
    expected_grad_x3_val = np.matmul(np.transpose(x2_val), np.ones_like(expected_yval))

    assert isinstance(y, ad.Node)
    # assert np.array_equal(y_val, expected_yval)
    # assert np.array_equal(grad_x2_val, expected_grad_x2_val)
    # assert np.array_equal(grad_w2_val, expected_grad_x3_val)
Project: Sisyphus    Author: davidbrandfonbrener

def output_step_scan(self, dummy, new_state):

    if self.Dale_ratio:
        new_output = tf.matmul(
            tf.nn.relu(new_state),
            tf.matmul(
                tf.abs(self.W_out) * self.output_Connectivity,
                self.Dale_out,
                name="in_2"),
            transpose_b=True, name="3") \
            + self.b_out

    else:
        new_output = tf.matmul(tf.nn.relu(new_state), self.W_out * self.output_Connectivity,
                               transpose_b=True, name="3") + self.b_out

    return new_output
Project: learning-rank-public    Author: andreweskeclarke

def gradient(x0, X, y, alpha):
    # gradient of the logistic loss

    w, c = x0[1:137], x0[0]

    #print("c is " + str(c))
    z = X.dot(w) + c
    z = phi(y * z)
    z0 = (z - 1) * y
    grad_w = np.matmul(z0, X) / X.shape[0] + alpha * w
    grad_c = z0.sum() / X.shape[0]

    grad_c = np.array(grad_c)
    #print(grad_w[0,1:5])
    return np.c_[([grad_c], grad_w)]


##### Stochastic Gradient Descent Optimiser ######
Project: pyMHT    Author: erikliland

def precalc(C, R, x_bar_list, P_bar_list):
    assert C.ndim == 2
    assert R.ndim == 2

    nMeasurement, nStates = x_bar_list.shape
    nObservableState = C.shape[0]

    z_hat_list = C.dot(x_bar_list.T).T
    S_list = np.matmul(np.matmul(C, P_bar_list), C.T) + R
    S_inv_list = np.linalg.inv(S_list)
    K_list = np.matmul(np.matmul(P_bar_list, C.T), S_inv_list)
    P_hat_list = P_bar_list - np.matmul(K_list.dot(C), P_bar_list)

    assert z_hat_list.shape == (nMeasurement, nObservableState), "z_hat ERROR"
    assert S_list.shape == (nMeasurement, nObservableState, nObservableState), "S ERROR"
    assert S_inv_list.shape == S_list.shape, "S_inv ERROR"
    assert K_list.shape == (nMeasurement, nStates, nObservableState)
    assert P_hat_list.shape == P_bar_list.shape, "P_hat ERROR"

    return z_hat_list, S_list, S_inv_list, K_list, P_hat_list
Project: information-dropout    Author: ucla-vision

def correlation(task, load=True):
    self = mytask
    if load:
        self.initialize(_load=True, _logging=False, _log_dir='other/')
    data = []
    for batch in self.iterate_minibatches('valid'):
        xtrain, ytrain = batch
        ytrain = np.eye(10)[ytrain]
        feed_dict = {self.x: xtrain, self.y: ytrain, self.sigma0: 1., self.initial_keep_prob: task['initial_keep_prob'], self.is_training: False}
        z = tf.get_collection('log_network')[-1]
        batch_z = self.sess.run(z, feed_dict)
        data.append(batch_z)
    data = np.vstack(data)
    data = data.reshape(data.shape[0], -1)

    def normal_tc(c0):
        c1i = np.diag(1. / np.diag(c0))
        p = np.matmul(c1i, c0)
        return -.5 * np.linalg.slogdet(p)[1] / c0.shape[0]

    c0 = np.cov(data, rowvar=False)
    tc = normal_tc(c0)
    print "Total correlation: %f" % tc

einsum and matmul

Related question: BLAS with symmetry in higher order tensor in Fortran

I am trying to use Python to exploit symmetry in a tensor contraction, A[a,b] B[b,c,d] = C[a,c,d], where B[b,c,d] = B[b,d,c] and therefore C[a,c,d] = C[a,d,c]. (Einstein summation convention is assumed, i.e. the repeated index b is summed over.)

With the following code:

import numpy as np
import time
# A[a,b] * B[b,c,d]
na = nb = nc = nd = 100
A = np.random.random((na, nb))
B = np.random.random((nb, nc, nd))
C = np.zeros((na, nc, nd))
C2 = np.zeros((na, nc, nd))
C3 = np.zeros((na, nc, nd))
# symmetrize B
for c in range(nc):
    for d in range(c):
        B[:, c, d] = B[:, d, c]

start_time = time.time()
C2 = np.einsum('ab,bcd->acd', A, B)
finish_time = time.time()
print('time einsum', finish_time - start_time)

start_time = time.time()
for c in range(nc):
    # c+1 is needed, since range(0) will be skipped
    for d in range(c+1):
        #C3[:, c, d] = np.einsum('ab,b->a', A[:, :], B[:, c, d])
        C3[:, c, d] = np.matmul(A[:, :], B[:, c, d])

for c in range(nc):
    for d in range(c+1, nd):
        C3[:, c, d] = C3[:, d, c]
finish_time = time.time()
print('time partial einsum', finish_time - start_time)

for a in range(int(na/10)):
    for c in range(int(nc/10)):
        for d in range(int(nd/10)):
            if abs((C3-C2)[a, c, d]) > 1.0e-12:
                print('warning', a, c, d, (C3-C2)[a, c, d])

it looks to me as if np.matmul is faster than np.einsum. For example, using np.matmul I get:

time einsum 0.07406115531921387
time partial einsum 0.0553278923034668

while using np.einsum I get:

time einsum 0.0751657485961914
time partial einsum 0.11624622344970703

Is the performance difference above typical? I have usually taken einsum for granted.

Solution

As a general rule I would expect matmul to be faster, although in the simpler cases einsum actually appears to use matmul.

But here are my timings.

In [20]: C2 = np.einsum('ab,bcd->acd', A, B)
In [21]: timeit C2 = np.einsum('ab,bcd->acd', A, B)
126 ms ± 1.3 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Your symmetry attempt with einsum:

In [22]: %%timeit
    ...: for c in range(nc):
    ...:     # c+1 is needed, since range(0) will be skipped
    ...:     for d in range(c+1):
    ...:         C3[:, c, d] = np.einsum('ab,b->a', A[:, :], B[:, c, d])
    ...:         #C3[:, c, d] = np.matmul(A[:, :], B[:, c, d])
    ...:
    ...: for c in range(nc):
    ...:     for d in range(c+1, nd):
    ...:         C3[:, c, d] = C3[:, d, c]
    ...:
128 ms ± 3.39 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

The same with matmul:

In [23]: %%timeit
    ...: for c in range(nc):
    ...:     # c+1 is needed, since range(0) will be skipped
    ...:     for d in range(c+1):
    ...:         #C3[:, c, d] = np.einsum('ab,b->a', A[:, :], B[:, c, d])
    ...:         C3[:, c, d] = np.matmul(A[:, :], B[:, c, d])
    ...:
    ...: for c in range(nc):
    ...:     for d in range(c+1, nd):
    ...:         C3[:, c, d] = C3[:, d, c]
    ...:
81.3 ms ± 1.14 ms per loop (mean ± std. dev. of 7 runs, 10 loops each)

Direct matmul:

In [24]: C4 = np.matmul(A, B.reshape(100, -1)).reshape(100, 100, 100)
In [25]: np.allclose(C2, C4)
Out[25]: True
In [26]: timeit C4 = np.matmul(A, B.reshape(100, -1)).reshape(100, 100, 100)
14.9 ms ± 167 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)

einsum also has an optimize flag. I thought it only mattered with 3 or more arguments, but it seems to help here too:

In [27]: timeit C2 = np.einsum('ab,bcd->acd', A, B, optimize=True)
20.3 ms ± 688 µs per loop (mean ± std. dev. of 7 runs, 10 loops each)

Sometimes, when the arrays are very large, some explicit iteration is faster because it reduces memory-management complexity. But I don't think it is worth it here when trying to exploit symmetry. Other SO answers show that matmul can, in some cases, detect symmetry and use a custom BLAS call, but I don't think that is the case here (it could not detect the symmetry in B without an expensive comparison).
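One way to inspect what optimize=True actually does is np.einsum_path, which reports the contraction order np.einsum would take and the theoretical FLOP savings; a small sketch on arrays of the same size as above:

import numpy as np

A = np.random.random((100, 100))
B = np.random.random((100, 100, 100))

# einsum_path returns the contraction path plus a printable report
path, report = np.einsum_path('ab,bcd->acd', A, B, optimize='optimal')
print(report)   # shows the chosen contraction order and the theoretical speedup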

InvalidArgumentError: cannot compute MatMul as input #0 (zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul]

Can someone explain how TensorFlow's eager mode works? I am trying to build a simple regression as follows:

import tensorflow as tf
tfe = tf.contrib.eager
tf.enable_eager_execution()
import numpy as np

def make_model():
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(4, activation='relu'))
    net.add(tf.keras.layers.Dense(1))
    return net

def compute_loss(pred, actual):
    return tf.reduce_mean(tf.square(tf.subtract(pred, actual)))

def compute_gradient(model, pred, actual):
    """compute gradients with given noise and input"""
    with tf.GradientTape() as tape:
        loss = compute_loss(pred, actual)
    grads = tape.gradient(loss, model.variables)
    return grads, loss

def apply_gradients(optimizer, grads, model_vars):
    optimizer.apply_gradients(zip(grads, model_vars))

model = make_model()
optimizer = tf.train.AdamOptimizer(1e-4)

x = np.linspace(0, 1, 1000)
y = x + np.random.normal(0, 0.3, 1000)
y = y.astype('float32')
train_dataset = tf.data.Dataset.from_tensor_slices((y.reshape(-1, 1)))

epochs = 2  # 10
batch_size = 25
itr = y.shape[0] // batch_size

for epoch in range(epochs):
    for data in tf.contrib.eager.Iterator(train_dataset.batch(25)):
        preds = model(data)
        grads, loss = compute_gradient(model, preds, data)
        print(grads)
        apply_gradients(optimizer, grads, model.variables)
#         with tf.GradientTape() as tape:
#             loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(preds, data))))
#         grads = tape.gradient(loss, model.variables)
#         print(grads)
#         optimizer.apply_gradients(zip(grads, model.variables), global_step=None)

Gradient output: [None, None, None, None, None, None]. The error is as follows:

---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-3-a589b9123c80> in <module>
     35         grads, loss = compute_gradient(model, preds, data)
     36         print(grads)
---> 37         apply_gradients(optimizer, grads, model.variables)
     38 #         with tf.GradientTape() as tape:
     39 #             loss = tf.sqrt(tf.reduce_mean(tf.square(tf.subtract(preds, data))))

<ipython-input-3-a589b9123c80> in apply_gradients(optimizer, grads, model_vars)
     17
     18 def apply_gradients(optimizer, grads, model_vars):
---> 19     optimizer.apply_gradients(zip(grads, model_vars))
     20
     21 model = make_model()

~/anaconda3/lib/python3.6/site-packages/tensorflow/python/training/optimizer.py in apply_gradients(self, grads_and_vars, global_step, name)
    589     if not var_list:
    590       raise ValueError("No gradients provided for any variable: %s." %
--> 591                        ([str(v) for _, v, _ in converted_grads_and_vars],))
    592     with ops.init_scope():
    593       self._create_slots(var_list)

ValueError: No gradients provided for any variable:

Edit

I updated my code. The problem is now in the gradient computation, which returns zeros. I have checked that the loss value is non-zero.

Answer 1

Part 1:
The problem is indeed the dtype of the input data. By default your Keras model expects float32, but you are passing float64. You can either change the model's dtype or cast the input to float32.

Change the model:

def make_model():
    net = tf.keras.Sequential()
    net.add(tf.keras.layers.Dense(4, activation='relu', dtype='float32'))
    net.add(tf.keras.layers.Dense(4, activation='relu'))
    net.add(tf.keras.layers.Dense(1))
    return net

Or change the input: y = y.astype('float32')

Part 2:
The function that evaluates the model (i.e. model(data)) must be called inside the tf.GradientTape() context. For example, you can replace your compute_loss method with:

def compute_loss(model, x, y):
    pred = model(x)
    return tf.reduce_mean(tf.square(tf.subtract(pred, y)))
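Putting the two fixes together, a minimal sketch of what the corrected training step could look like (it reuses the variable names from the question; the exact loop wiring is an assumption, not the answerer's verbatim code):

def compute_gradient(model, x, y):
    # the forward pass and the loss are both recorded by the tape
    with tf.GradientTape() as tape:
        loss = compute_loss(model, x, y)
    grads = tape.gradient(loss, model.variables)
    return grads, loss

y = y.astype('float32')  # avoid the float64 -> float32 [Op:MatMul] dtype error

for epoch in range(epochs):
    for data in tf.contrib.eager.Iterator(train_dataset.batch(batch_size)):
        # the question regresses each batch onto itself, so data is both input and target
        grads, loss = compute_gradient(model, data, data)
        optimizer.apply_gradients(zip(grads, model.variables))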

Error in matMul: inner shapes (1) and (2) of Tensors with shapes 684,1 and 2,1 and transposeA=false and transposeB=false must match

I am a complete beginner in AI and tensorflow.js, currently following Stephen Grider's machine learning course. I am supposed to get an output after the code below, but I get the error instead. Please help.

Code (linear-regression.js):

const tf = require('@tensorflow/tfjs');

class LinearRegression {
    constructor(features, labels, options) {
        this.features = tf.tensor(features);
        this.labels = tf.tensor(labels);
        this.features = tf.ones([this.features.shape[0], 1]).concat(this.features); // generates the column of ones for the horsepower
        this.options = Object.assign(
            { learningRate: 0.1, iterations: 1000 }, options
        ); // default value is 0.1; if the learning rate is provided, the value is overridden... iterations = no. of times gradient descent runs
        this.weights = tf.zeros([2, 1]); // initial tensor of both m and b are zeros
    }

    gradientDescent() {
        const currentGuesses = this.features.matMul(this.weights); // matMul is matrix multiplication: features * weights
        const differences = currentGuesses.sub(this.labels); // (features * weights) - labels
        const slopes = this.features
            .transpose()
            .matMul(differences)
            .div(features.shape[0]); // slope of MSE with respect to both m and b: features * ((features * weights) - labels) / total no. of features
        this.weights = this.weights.sub(slopes.mul(this.options.learningRate));
    }

    train() {
        for (let i = 0; i < this.options.iterations; i++) {
            this.gradientDescent();
        }
        /*test(testFeatures, testLabels) {
            testFeatures = tf.tensor(testFeatures);
            testLabels = tf.tensor(testLabels);
        } */
    }
}

module.exports = LinearRegression;

index.js:

require('@tensorflow/tfjs-node');
const tf = require('@tensorflow/tfjs');
const loadCSV = require('./load-csv');
const LinearRegression = require('./linear-regression');

let { features, labels, testFeatures, testLabels } = loadCSV('./cars.csv', {
    shuffle: true,
    splitTest: 50,
    dataColumns: ['horsepower'],
    labelColumns: ['mpg']
});

const regression = new LinearRegression(features, labels, {
    learningRate: 0.002,
    iterations: 100
});

regression.train();
console.log(
    'Updated M is:', regression.weights.get(1, 0),
    'Updated B is:', regression.weights.get(0, 0)
);

Error:

D:\Application Development\MLKits-master\MLKits-master\regressions\node_modules\@tensorflow\tfjs-core\dist\ops\operation.js:32
        throw ex;
        ^
Error: Error in matMul: inner shapes (1) and (2) of Tensors with shapes 684,1 and 2,1 and transposeA=false and transposeB=false must match.
    at Object.assert (D:\Application Development\MLKits-master\MLKits-master\regressions\node_modules\@tensorflow\tfjs-core\dist\util.js:36:15)
    at matMul_ (D:\Application Development\MLKits-master\MLKits-master\regressions\node_modules\@tensorflow\tfjs-core\dist\ops\matmul.js:25:10)
    at Object.matMul (D:\Application Development\MLKits-master\MLKits-master\regressions\node_modules\@tensorflow\tfjs-core\dist\ops\operation.js:23:29)
    at Tensor.matMul (D:\Application Development\MLKits-master\MLKits-master\regressions\node_modules\@tensorflow\tfjs-core\dist\tensor.js:315:26)
    at LinearRegression.gradientDescent (D:\Application Development\MLKits-master\MLKits-master\regressions\linear-regression.js:19:46)
    at LinearRegression.train (D:\Application Development\MLKits-master\MLKits-master\regressions\linear-regression.js:34:18)
    at Object.<anonymous> (D:\Application Development\MLKits-master\MLKits-master\regressions\index.js:18:12)
    at Module._compile (internal/modules/cjs/loader.js:1063:30)
    at Object.Module._extensions..js (internal/modules/cjs/loader.js:1092:10)
    at Module.load (internal/modules/cjs/loader.js:928:32)
Solution

The error is thrown by

this.features.matMul(this.weights)

It is a matrix multiplication between this.features, of shape [684,1], and this.weights, of shape [2,1]. To be able to multiply a matrix A (shape [a,b]) by B (shape [c,d]), b and c must match, which is not the case here.

To fix the problem, this.weights should be transposed:

this.features.matMul(this.weights, false, true)
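The same inner-dimension rule can be checked quickly in numpy; the shapes below mirror the tfjs error above (this is only an illustration, not part of the course code):

import numpy as np

features = np.zeros((684, 1))
weights = np.zeros((2, 1))

# np.matmul(features, weights) raises ValueError: inner dimensions 1 and 2 do not match
print(np.matmul(features, weights.T).shape)   # transposing weights gives (684, 1) @ (1, 2) -> (684, 2)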

matmul: Input operand 1 has a mismatch

The full error is:

matmul: Input operand 1 has a mismatch in its core dimension 0, with gufunc signature (n?,k),(k,m?)->(n?,m?) (size 5 is different from 1)

It is raised when I run:

pre = lm.predict(y_test)

Please suggest what to do.
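No training code is given in the question, but this error usually means the array passed to predict() has a different number of feature columns than the data the model was fitted on (here 5 vs. 1), and a common cause is passing y_test instead of X_test. A hypothetical reproduction and fix (all variable names and shapes below are assumptions):

import numpy as np
from sklearn.linear_model import LinearRegression

X_train = np.random.random((100, 5))   # model fitted on 5 feature columns
y_train = np.random.random(100)
X_test = np.random.random((20, 5))
y_test = np.random.random((20, 1))     # targets, only 1 column

lm = LinearRegression().fit(X_train, y_train)

# lm.predict(y_test)     # mismatch: the model expects 5 columns, y_test has 1
pre = lm.predict(X_test)  # predict on the feature matrix, not on the targets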

This concludes the introduction to Python numpy module matmul() example source code and python numpy.mat. Thank you for reading. To learn more about einsum and matmul, InvalidArgumentError: cannot compute MatMul as input #0 (zero-based) was expected to be a float tensor but is a double tensor [Op:MatMul], Error in matMul: inner shapes (1) and (2) of Tensors with shapes 684,1 and 2,1 and transposeA=false and transposeB=false must match, or matmul: Input operand 1 has a mismatch, please search this site.
