
Python numpy module: promote_types() example source code (the numpy module in Python)



This post shares example source code for the Python numpy module function promote_types() and explains the numpy module in Python. It also covers: Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable; numpy.random.random & numpy.ndarray.astype & numpy.arange; numpy.ravel()/numpy.flatten()/numpy.squeeze(); and Numpy array creation with numpy.array(), numpy.arange(), np.linspace() and basic array attributes. If it happens to solve a problem you are facing, don't forget to follow this site. Let's get started!

Table of contents:

Python numpy module: promote_types() example source code (the numpy module in Python)
Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable
numpy.random.random & numpy.ndarray.astype & numpy.arange
numpy.ravel() / numpy.flatten() / numpy.squeeze()
Numpy: array creation with numpy.array(), numpy.arange(), np.linspace(), and basic array attributes

Python numpy module: promote_types() example source code

From open-source Python projects, we extracted the following 36 code examples illustrating how to use numpy.promote_types().
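Before the project listings, here is a minimal standalone sketch (not taken from any of the projects below) of what numpy.promote_types() actually does: it returns the smallest dtype to which both of its arguments can be safely cast, and the returned dtype is always native-endian.

import numpy as np

print(np.promote_types(np.int16, np.float32))    # float32: holds every int16 value exactly
print(np.promote_types('i8', 'u8'))              # float64: no integer dtype covers both ranges
print(np.promote_types('i8', 'S'))               # S21: a byte string wide enough for any printed int64
print(np.promote_types('>i8', '>i8').byteorder)  # '=': result is native-endian even for big-endian inputs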

Project: radar    Author: amoose136    | project source | file source

def test_promote_types_endian(self):
    # promote_types should always return native-endian types
    assert_equal(np.promote_types('<i8', '<i8'), np.dtype('i8'))
    assert_equal(np.promote_types('>i8', '>i8'), np.dtype('i8'))

    assert_equal(np.promote_types('>i8', '>U16'), np.dtype('U21'))
    assert_equal(np.promote_types('<i8', '<U16'), np.dtype('U21'))
    assert_equal(np.promote_types('>U16', '>i8'), np.dtype('U21'))
    assert_equal(np.promote_types('<U16', '<i8'), np.dtype('U21'))

    assert_equal(np.promote_types('<S5', '<U8'), np.dtype('U8'))
    assert_equal(np.promote_types('>S5', '>U8'), np.dtype('U8'))
    assert_equal(np.promote_types('<U8', '<S5'), np.dtype('U8'))
    assert_equal(np.promote_types('>U8', '>S5'), np.dtype('U8'))
    assert_equal(np.promote_types('<U5', '>U5'), np.dtype('U5'))

    assert_equal(np.promote_types('<M8', '<M8'), np.dtype('M8'))
    assert_equal(np.promote_types('>M8', '>M8'), np.dtype('M8'))
    assert_equal(np.promote_types('<m8', '<m8'), np.dtype('m8'))
    assert_equal(np.promote_types('>m8', '>m8'), np.dtype('m8'))
Project: krpcScripts    Author: jwvanderbeck    | project source | file source

def test_promote_types_endian(self):
    # promote_types should always return native-endian types
    # (same test_promote_types_endian as in the radar listing above)

Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def test_promote_types_endian(self):
    # promote_types should always return native-endian types
    # (same test_promote_types_endian as in the radar listing above)

Project: aws-lambda-numpy    Author: vitolimandibhrata    | project source | file source

def test_promote_types_endian(self):
    # promote_types should always return native-endian types
    # (same test_promote_types_endian as in the radar listing above)
Project: fastmat    Author: EMS-TU-Ilmenau    | project source | file source

def testGram(test):
    instance, reference = test[TEST.INSTANCE], test[TEST.REFERENCE]

    # usually expect the normalized matrix to be promoted in type complexity
    # due to division by column-norm during the process. However there exist
    # matrices that treat the problem differently. Exclude the expected pro-
    # motion for them.
    query = ({} if isinstance(instance, (Diag, Eye, Zero))
             else {TEST.TYPE_PROMOTION: np.float32})

    # account for "extra computation stage" in gram
    query[TEST.TOL_POWER] = test.get(TEST.TOL_POWER, 1.) * 2

    query[TEST.RESULT_OUTPUT] = instance.gram.array
    query[TEST.RESULT_REF] = reference.astype(
        np.promote_types(np.float32, reference.dtype)).T.conj().dot(reference)

    # ignore actual type of generated gram:
    query[TEST.CHECK_DATATYPE] = False

    return compareResults(test, query)


################################################## test: T (property)
Project: lambda-numba    Author: rlhotovy    | project source | file source

def test_promote_types_endian(self):
    # promote_types should always return native-endian types
    # (same test_promote_types_endian as in the radar listing above)

Project: deliver    Author: orchestor    | project source | file source

def test_promote_types_endian(self):
    # promote_types should always return native-endian types
    # (same test_promote_types_endian as in the radar listing above)
Project: OpenMDAO    Author: OpenMDAO    | project source | file source

def test_jacobian_set_item(self, dtypes, shapes):

    shape, constructor, expected_shape = shapes
    dtype, value = dtypes

    prob = Problem(model=Group())
    comp = ExplicitSetItemComp(dtype, value, shape, constructor)
    prob.model.add_subsystem('C1', comp)
    prob.setup(check=False)

    prob.set_solver_print(level=0)
    prob.run_model()
    prob.model.run_apply_nonlinear()
    prob.model.run_linearize()

    expected = constructor(value)
    with prob.model._subsystems_allprocs[0].jacobian_context() as J:
        jac_out = J['out', 'in'] * -1

    self.assertEqual(len(jac_out.shape), 2)
    expected_dtype = np.promote_types(dtype, float)
    self.assertEqual(jac_out.dtype, expected_dtype)
    assert_rel_error(self, jac_out, np.atleast_2d(expected).reshape(expected_shape), 1e-15)
Project: Alfred    Author: jkachhadia    | project source | file source

def test_promote_types_endian(self):
    # promote_types should always return native-endian types
    # (same test_promote_types_endian as in the radar listing above)
Project: radar    Author: amoose136    | project source | file source

def test_promote_types_strings(self):
    assert_equal(np.promote_types('bool', 'S'), np.dtype('S5'))
    assert_equal(np.promote_types('b', 'S'), np.dtype('S4'))
    assert_equal(np.promote_types('u1', 'S'), np.dtype('S3'))
    assert_equal(np.promote_types('u2', 'S'), np.dtype('S5'))
    assert_equal(np.promote_types('u4', 'S'), np.dtype('S10'))
    assert_equal(np.promote_types('u8', 'S'), np.dtype('S20'))
    assert_equal(np.promote_types('i1', 'S'), np.dtype('S4'))
    assert_equal(np.promote_types('i2', 'S'), np.dtype('S6'))
    assert_equal(np.promote_types('i4', 'S'), np.dtype('S11'))
    assert_equal(np.promote_types('i8', 'S'), np.dtype('S21'))
    assert_equal(np.promote_types('bool', 'U'), np.dtype('U5'))
    assert_equal(np.promote_types('b', 'U'), np.dtype('U4'))
    assert_equal(np.promote_types('u1', 'U'), np.dtype('U3'))
    assert_equal(np.promote_types('u2', 'U'), np.dtype('U5'))
    assert_equal(np.promote_types('u4', 'U'), np.dtype('U10'))
    assert_equal(np.promote_types('u8', 'U'), np.dtype('U20'))
    assert_equal(np.promote_types('i1', 'U'), np.dtype('U4'))
    assert_equal(np.promote_types('i2', 'U'), np.dtype('U6'))
    assert_equal(np.promote_types('i4', 'U'), np.dtype('U11'))
    assert_equal(np.promote_types('i8', 'U'), np.dtype('U21'))
    assert_equal(np.promote_types('bool', 'S1'), np.dtype('S5'))
    assert_equal(np.promote_types('bool', 'S30'), np.dtype('S30'))
    assert_equal(np.promote_types('b', 'S1'), np.dtype('S4'))
    assert_equal(np.promote_types('b', 'S30'), np.dtype('S30'))
    assert_equal(np.promote_types('u1', 'S1'), np.dtype('S3'))
    assert_equal(np.promote_types('u1', 'S30'), np.dtype('S30'))
    assert_equal(np.promote_types('u2', 'S1'), np.dtype('S5'))
    assert_equal(np.promote_types('u2', 'S30'), np.dtype('S30'))
    assert_equal(np.promote_types('u4', 'S1'), np.dtype('S10'))
    assert_equal(np.promote_types('u4', 'S30'), np.dtype('S30'))
    assert_equal(np.promote_types('u8', 'S1'), np.dtype('S20'))
    assert_equal(np.promote_types('u8', 'S30'), np.dtype('S30'))
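The string lengths asserted above are not arbitrary: when an integer or boolean dtype is promoted with an unsized string dtype, NumPy picks a string just wide enough to hold the longest printed value of that type. A quick standalone check (my own example, not from the projects):

import numpy as np

print(np.promote_types('u8', 'S'))    # S20: '18446744073709551615' (max uint64) is 20 characters
print(np.promote_types('i2', 'U'))    # U6:  '-32768' (min int16) is 6 characters
print(np.promote_types('bool', 'S'))  # S5:  'False' is 5 characters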
Project: radar    Author: amoose136    | project source | file source

def test_dtype_promotion(self):
    # datetime <op> datetime computes the metadata gcd
    # timedelta <op> timedelta computes the metadata gcd
    for mM in ['m', 'M']:
        assert_equal(
            np.promote_types(np.dtype(mM+'8[2Y]'), np.dtype(mM+'8[2Y]')),
            np.dtype(mM+'8[2Y]'))
        assert_equal(
            np.promote_types(np.dtype(mM+'8[12Y]'), np.dtype(mM+'8[15Y]')),
            np.dtype(mM+'8[3Y]'))
        assert_equal(
            np.promote_types(np.dtype(mM+'8[62M]'), np.dtype(mM+'8[24M]')),
            np.dtype(mM+'8[2M]'))
        assert_equal(
            np.promote_types(np.dtype(mM+'8[1W]'), np.dtype(mM+'8[2D]')),
            np.dtype(mM+'8[1D]'))
        assert_equal(
            np.promote_types(np.dtype(mM+'8[W]'), np.dtype(mM+'8[13s]')),
            np.dtype(mM+'8[s]'))
        assert_equal(
            np.promote_types(np.dtype(mM+'8[13W]'), np.dtype(mM+'8[49s]')),
            np.dtype(mM+'8[7s]'))
    # timedelta <op> timedelta raises when there is no reasonable gcd
    assert_raises(TypeError, np.promote_types,
                  np.dtype('m8[Y]'), np.dtype('m8[D]'))
    assert_raises(TypeError, np.promote_types,
                  np.dtype('m8[M]'), np.dtype('m8[W]'))
    # timedelta <op> timedelta may overflow with big unit ranges
    assert_raises(OverflowError, np.promote_types,
                  np.dtype('m8[W]'), np.dtype('m8[fs]'))
    assert_raises(OverflowError, np.promote_types,
                  np.dtype('m8[s]'), np.dtype('m8[as]'))
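For datetime64/timedelta64 dtypes, promotion keeps the coarsest unit in which both operands can be represented exactly, i.e. the greatest common divisor of the two unit metadata, which is what the "gcd" comments above refer to. Two of the cases worked out by hand:

import numpy as np

# 1 week = 604800 seconds, and gcd(604800, 13) = 1, so the result unit is [s]
print(np.promote_types(np.dtype('m8[W]'), np.dtype('m8[13s]')))    # timedelta64[s]

# 13 weeks = 7862400 seconds, and gcd(7862400, 49) = 7, so the result unit is [7s]
print(np.promote_types(np.dtype('m8[13W]'), np.dtype('m8[49s]')))  # timedelta64[7s]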
Project: radar    Author: amoose136    | project source | file source

def print_coercion_table(ntypes, inputfirstvalue, inputsecondvalue, firstarray, use_promote_types=False):
    print('+', end=' ')
    for char in ntypes:
        print(char, end=' ')
    print()
    for row in ntypes:
        if row == 'O':
            rowtype = GenericObject
        else:
            rowtype = np.obj2sctype(row)

        print(row, end=' ')
        for col in ntypes:
            if col == 'O':
                coltype = GenericObject
            else:
                coltype = np.obj2sctype(col)
            try:
                if firstarray:
                    rowvalue = np.array([rowtype(inputfirstvalue)], dtype=rowtype)
                else:
                    rowvalue = rowtype(inputfirstvalue)
                colvalue = coltype(inputsecondvalue)
                if use_promote_types:
                    char = np.promote_types(rowvalue.dtype, colvalue.dtype).char
                else:
                    value = np.add(rowvalue, colvalue)
                    if isinstance(value, np.ndarray):
                        char = value.dtype.char
                    else:
                        char = np.dtype(type(value)).char
            except ValueError:
                char = '!'
            except OverflowError:
                char = '@'
            except TypeError:
                char = '#'
            print(char, end=' ')
        print()
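A hedged usage sketch for the listing above (this call is my own, not from the radar project): passing use_promote_types=True prints the promote_types() result for every pair of type characters; the string of type codes below deliberately leaves out 'O', so the GenericObject helper, which is not part of this excerpt, is never needed.

import numpy as np

# assumes print_coercion_table() from the listing above is already defined
print_coercion_table('?bhilqBHILQefdgFDG', 0, 0, False, use_promote_types=True)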
Project: pytorch_fnet    Author: AllenCellModeling    | project source | file source

def dtype(self):
    """Return dtype of image data in file."""
    # subblock data can be of different pixel type
    dtype = self.filtered_subblock_directory[0].dtype[-2:]
    for directory_entry in self.filtered_subblock_directory:
        dtype = numpy.promote_types(dtype, directory_entry.dtype[-2:])
    return dtype
Project: krpcScripts    Author: jwvanderbeck    | project source | file source

def test_promote_types_strings(self):
    # (same test_promote_types_strings as in the radar listing above)

Project: krpcScripts    Author: jwvanderbeck    | project source | file source

def test_dtype_promotion(self):
    # datetime <op> datetime computes the metadata gcd
    # timedelta <op> timedelta computes the metadata gcd
    # (same test_dtype_promotion as in the radar listing above)

Project: krpcScripts    Author: jwvanderbeck    | project source | file source

def print_coercion_table(ntypes, inputfirstvalue, inputsecondvalue, firstarray, use_promote_types=False):
    # (same print_coercion_table as in the radar listing above)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def test_promote_types_strings(self):
    # (same test_promote_types_strings as in the radar listing above)

Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def test_dtype_promotion(self):
    # datetime <op> datetime computes the metadata gcd
    # timedelta <op> timedelta computes the metadata gcd
    # (same test_dtype_promotion as in the radar listing above)

Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda    Author: SignalMedia    | project source | file source

def print_coercion_table(ntypes, inputfirstvalue, inputsecondvalue, firstarray, use_promote_types=False):
    # (same print_coercion_table as in the radar listing above)
Project: aws-lambda-numpy    Author: vitolimandibhrata    | project source | file source

def test_promote_types_strings(self):
    # (same test_promote_types_strings as in the radar listing above)

Project: aws-lambda-numpy    Author: vitolimandibhrata    | project source | file source

def test_dtype_promotion(self):
    # datetime <op> datetime computes the metadata gcd
    # timedelta <op> timedelta computes the metadata gcd
    # (same test_dtype_promotion as in the radar listing above)

Project: aws-lambda-numpy    Author: vitolimandibhrata    | project source | file source

def print_coercion_table(ntypes, inputfirstvalue, inputsecondvalue, firstarray, use_promote_types=False):
    # (same print_coercion_table as in the radar listing above)
Project: fastmat    Author: EMS-TU-Ilmenau    | project source | file source

def testLargestSV(test):
    query = {TEST.TYPE_EXPECTED: np.float64}
    instance = test[TEST.INSTANCE]

    # account for "extra computation stage" (gram) in largestSV
    query[TEST.TOL_POWER] = test.get(TEST.TOL_POWER, 1.) * 2
    query[TEST.TOL_minePS] = _getTypeEps(safeTypeExpansion(instance.dtype))

    # determine reference result
    largestSV = np.linalg.svd(test[TEST.REFERENCE], compute_uv=False)[0]
    query[TEST.RESULT_REF] = np.array(
        largestSV, dtype=np.promote_types(largestSV.dtype, np.float64))

    # largestSV may not converge fast enough for a bad random starting point
    # so retry some times before throwing up
    for tries in range(9):
        maxSteps = 100. * 10. ** (tries / 2.)
        query[TEST.RESULT_OUTPUT] = np.array(
            instance.getLargestSV(maxSteps=maxSteps, alwaysReturn=True))
        result = compareResults(test, query)
        if result[TEST.RESULT]:
            break
    return result


################################################## test: gram (property)
Project: lambda-numba    Author: rlhotovy    | project source | file source

def test_promote_types_strings(self):
    # (same test_promote_types_strings as in the radar listing above)

Project: lambda-numba    Author: rlhotovy    | project source | file source

def test_dtype_promotion(self):
    # datetime <op> datetime computes the metadata gcd
    # timedelta <op> timedelta computes the metadata gcd
    # (same test_dtype_promotion as in the radar listing above)

Project: lambda-numba    Author: rlhotovy    | project source | file source

def print_coercion_table(ntypes, inputfirstvalue, inputsecondvalue, firstarray, use_promote_types=False):
    # (same print_coercion_table as in the radar listing above)
Project: deliver    Author: orchestor    | project source | file source

def test_promote_types_strings(self):
    # (same test_promote_types_strings as in the radar listing above)

Project: deliver    Author: orchestor    | project source | file source

def test_dtype_promotion(self):
    # datetime <op> datetime computes the metadata gcd
    # timedelta <op> timedelta computes the metadata gcd
    # (same test_dtype_promotion as in the radar listing above)

Project: deliver    Author: orchestor    | project source | file source

def print_coercion_table(ntypes, inputfirstvalue, inputsecondvalue, firstarray, use_promote_types=False):
    # (same print_coercion_table as in the radar listing above)
Project: Alfred    Author: jkachhadia    | project source | file source

def test_promote_types_strings(self):
    # (same test_promote_types_strings as in the radar listing above)

Project: Alfred    Author: jkachhadia    | project source | file source

def test_dtype_promotion(self):
    # datetime <op> datetime computes the metadata gcd
    # timedelta <op> timedelta computes the metadata gcd
    # (same test_dtype_promotion as in the radar listing above)

Project: Alfred    Author: jkachhadia    | project source | file source

def print_coercion_table(ntypes, inputfirstvalue, inputsecondvalue, firstarray, use_promote_types=False):
    # (same print_coercion_table as in the radar listing above)
Project: cellranger    Author: 10XGenomics    | project source | file source

def combine_data_frame_files(output_filename, input_filenames):
    in_files = [ h5py.File(f, 'r') for f in input_filenames ]
    column_names = [ tuple(sorted(f.attrs.get("column_names"))) for f in in_files ]

    uniq = set(column_names)

    if len(uniq) > 1:
        raise Exception("you're attempting to combine incompatible data frames")

    if len(uniq) == 0:
        r = "No input files? output: %s, inputs: %s" % (output_filename, str(input_filenames))
        raise Exception(r)

    column_names = uniq.pop()

    if os.path.exists(output_filename):
        os.remove(output_filename)

    out = h5py.File(output_filename)
    out.attrs.create("column_names", column_names)

    # Write successive columns
    for c in column_names:
        datasets = [f[c] for f in in_files if len(f[c]) > 0]
        num_w_levels = np.sum([has_levels(ds) for ds in datasets if len(ds) > 0])
        fract_w_levels = float(num_w_levels) / (len(datasets) + 1)

        if fract_w_levels > 0.25:
            combine_level_column(out, datasets, c)
            continue

        # filter out empty rows from the type promotion, unless they're all empty
        types = [get_col_type(ds) for ds in datasets if len(ds) > 0]
        if len(types) == 0:
            # Fall back to getting column types from empty data frames
            types = [get_col_type(f[c]) for f in in_files]
        common_type = reduce(np.promote_types, types)

        # numpy doesn't understand vlen strings -- so always promote to vlen strings if anything is using them
        if vlen_string in types:
            common_type = vlen_string

        out_ds = out.create_dataset(c, shape=(0,), maxshape=(None,), dtype=common_type,
                                    compression=COMPRESSION, shuffle=True, chunks=(CHUNK_SIZE,))

        item_count = 0
        for ds in datasets:
            new_items = ds.shape[0]
            out_ds.resize((item_count + new_items,))
            data = ds[:]

            if has_levels(ds):
                levels = get_levels(ds)
                data = levels[data]

            out_ds[item_count:(item_count + new_items)] = data
            item_count += new_items

    for in_f in in_files:
        in_f.close()

    out.close()
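The promotion idiom in the function above is reduce(np.promote_types, types), which folds a list of column dtypes into a single dtype that every column can be cast to. A standalone sketch with made-up dtypes (not cellranger data):

from functools import reduce
import numpy as np

types = [np.dtype('int32'), np.dtype('int64'), np.dtype('float32')]  # hypothetical per-file column dtypes
common_type = reduce(np.promote_types, types)
print(common_type)  # float64: the smallest dtype that safely holds both int64 and float32 values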
Project: fastmat    Author: EMS-TU-Ilmenau    | project source | file source

def ISTA(
        fmatA,
        arrB,
        numLambda=0.1,
        numMaxSteps=100
):
    '''
    Wrapper around the ISTA algorithm to allow processing of arrays of signals
        fmatA         - input system matrix
        arrB          - input data vector (measurements)
        numLambda     - balancing parameter in optimization problem
                        between data fidelity and sparsity
        numMaxSteps   - maximum number of steps to run
        numL          - step size during the conjugate gradient step
    '''

    if len(arrB.shape) > 2:
        raise ValueError("Only n x m arrays are supported for ISTA")

    # calculate the largest singular value to get the right step size
    numL = 1.0 / (fmatA.largestSV ** 2)

    arrX = np.zeros(
        (fmatA.numM, arrB.shape[1]),
        dtype=np.promote_types(np.float32, arrB.dtype)
    )

    # start iterating
    for numStep in range(numMaxSteps):
        # do the gradient step and threshold
        arrStep = arrX - numL * fmatA.backward(fmatA.forward(arrX) - arrB)
        arrX = _softThreshold(arrStep, numL * numLambda * 0.5)

    # return the unthresholded values for all non-zero support elements
    return np.where(arrX != 0, arrStep, arrX)


################################################################################
### Maintenance and Documentation
################################################################################

################################################## inspection interface
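The _softThreshold() helper called above is not part of this excerpt; as a rough, self-contained sketch of the soft-thresholding step used by ISTA (the name, signature and purely real-valued handling here are my assumptions, not fastmat's actual implementation):

import numpy as np

def soft_threshold(arr, threshold):
    # shrink every entry toward zero by `threshold`; entries smaller than it become exactly zero
    return np.sign(arr) * np.maximum(np.abs(arr) - threshold, 0.0)

x = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
print(soft_threshold(x, 0.5))  # [-1.5  0.   0.   0.   1. ]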
Project: fastmat    Author: EMS-TU-Ilmenau    | project source | file source

def FISTA(
        fmatA,
        arrB,
        numLambda=0.1,
        numMaxSteps=100
):
    '''
    Wrapper around the FISTA algorithm to allow processing of arrays of signals
        fmatA         - input system matrix
        arrB          - input data vector (measurements)
        numLambda     - balancing parameter in optimization problem
                        between data fidelity and sparsity
        numMaxSteps   - maximum number of steps to run
        numL          - step size during the conjugate gradient step
    '''

    if len(arrB.shape) > 2:
        raise ValueError("Only n x m arrays are supported for FISTA")

    # calculate the largest singular value to get the right step size
    numL = 1.0 / (fmatA.largestSV ** 2)
    t = 1
    arrX = np.zeros(
        (fmatA.numM, arrB.shape[1]),
        dtype=np.promote_types(np.float32, arrB.dtype)
    )
    # initial arrY
    arrY = np.copy(arrX)
    # start iterating
    for numStep in range(numMaxSteps):
        arrXold = np.copy(arrX)
        # do the gradient step and threshold
        arrStep = arrY - numL * fmatA.backward(fmatA.forward(arrY) - arrB)
        arrX = _softThreshold(arrStep, numL * numLambda * 0.5)

        # update t
        tOld = t
        t = (1 + np.sqrt(1 + 4 * t ** 2)) / 2
        # update arrY
        arrY = arrX + ((tOld - 1) / t) * (arrX - arrXold)
    # return the unthresholded values for all non-zero support elements
    return np.where(arrX != 0, arrStep, arrX)


################################################################################
### Maintenance and Documentation
################################################################################

################################################## inspection interface
Project: OpenMDAO    Author: OpenMDAO    | project source | file source

def _set_abs(self, abs_key, subjac):
    """
    Set sub-Jacobian.

    Parameters
    ----------
    abs_key : (str, str)
        Absolute name pair of sub-Jacobian.
    subjac : int or float or ndarray or sparse matrix
        sub-Jacobian as a scalar, vector, array, or AIJ list or tuple.
    """
    if not issparse(subjac):
        # np.promote_types will choose the smallest dtype that can contain both arguments
        subjac = np.atleast_1d(subjac)
        safe_dtype = np.promote_types(subjac.dtype, float)
        subjac = subjac.astype(safe_dtype, copy=False)

        # Bail here so that we allow top level jacobians to be of reduced size when indices are
        # specified on driver vars.
        if self._override_checks:
            self._subjacs[abs_key] = subjac
            return

        if abs_key in self._subjacs_info:
            subjac_info = self._subjacs_info[abs_key][0]
            rows = subjac_info['rows']
        else:
            rows = None

        if rows is None:
            # Dense subjac
            shape = self._abs_key2shape(abs_key)
            subjac = np.atleast_2d(subjac)
            if subjac.shape == (1, 1):
                subjac = subjac[0, 0] * np.ones(shape, dtype=safe_dtype)
            else:
                subjac = subjac.reshape(shape)

            if abs_key in self._subjacs and self._subjacs[abs_key].shape == shape:
                np.copyto(self._subjacs[abs_key], subjac)
            else:
                self._subjacs[abs_key] = subjac.copy()
        else:
            # Sparse subjac
            if subjac.shape == (1,):
                subjac = subjac[0] * np.ones(rows.shape, dtype=safe_dtype)

            if subjac.shape != rows.shape:
                raise ValueError("Sub-jacobian for key %s has "
                                 "the wrong shape (%s), expected (%s)." %
                                 (abs_key, subjac.shape, rows.shape))

            if abs_key in self._subjacs and subjac.shape == self._subjacs[abs_key][0].shape:
                np.copyto(self._subjacs[abs_key][0], subjac)
            else:
                self._subjacs[abs_key] = [subjac.copy(), rows, subjac_info['cols']]
    else:
        self._subjacs[abs_key] = subjac
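The promotion line in _set_abs(), np.promote_types(subjac.dtype, float), guarantees a floating-point storage dtype without demoting complex input. A small standalone illustration of that behaviour:

import numpy as np

for dt in (np.int64, np.float32, np.complex128):
    print(dt.__name__, '->', np.promote_types(dt, float))
# int64 -> float64
# float32 -> float64 (the Python float type means float64 here)
# complex128 -> complex128 (complex data is left untouched)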

Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable

How to solve: Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable?

"Good evening. When trying to print the following, I ran into a numpy problem in Jupyter and got an error. Note that the Python version is 3.8.8. I first tested it with Spyder, where it ran correctly and gave me the expected results.

Using Spyder:

import numpy as np
for i in range(5):
    n = np.random.rand()
    print(n)
Results
0.6604903457995978
0.8236300859753154
0.16067650689842816
0.6967868357083673
0.4231597934445466

Now with Jupyter:

import numpy as np
for i in range(5):
    n = np.random.rand()
    print(n)
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-78-0c6a801b3ea9> in <module>
       2 for i in range (5):
       3 n = np.random.rand ()
---->  4 print (n)

       TypeError: 'numpy.ndarray' object is not callable

I would appreciate any help on how to fix this in Jupyter.

Thank you very much for your time.

Atte., John"

Solution

No effective solution to this problem has been found yet; the editors are still looking for one.

If you have already found a good solution, you are welcome to send it, together with a link to this page, to the editors.

Editor email: dio#foxmail.com (replace # with @)
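Although no confirmed fix is recorded above, this particular TypeError in Jupyter usually means that the name print (or some other function name) was rebound to a NumPy array in an earlier cell of the same session, which a fresh run in Spyder would not reproduce. A minimal sketch of how the situation arises and how to undo it (this reconstruction is an assumption, not taken from the asker's notebook):

import numpy as np

print = np.random.rand(3)       # an earlier cell accidentally shadows the built-in print

try:
    print(np.random.rand())     # now raises: 'numpy.ndarray' object is not callable
except TypeError:
    pass

del print                       # remove the shadowing name (or simply restart the kernel)
print("print() works again")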

numpy.random.random & numpy.ndarray.astype & numpy.arange

I came across the following code today:

xb = np.random.random((nb, d)).astype('float32')  # create a 2-D matrix of random numbers (nb rows, d columns)
xb[:, 0] += np.arange(nb) / 1000.                 # add an offset to each value in the first column

Understanding these two lines requires understanding three functions.

1. Generating random numbers

numpy.random.random(size=None)

When size is None, a single float is returned.

When size is not None, a numpy.ndarray is returned. For example, numpy.random.random((1, 2)) returns a numpy array with 1 row and 2 columns.

2. Converting the type of every element of a numpy array

numpy.ndarray.astype(dtype)

Returns a numpy.ndarray. For example, numpy.array([1, 2, 2.5]).astype(int) returns the numpy array [1, 2, 2].

3. Generating an evenly spaced sequence

numpy.arange([start, ]stop, [step, ]dtype=None)

Similar in purpose to Python's built-in range() and numpy's numpy.linspace.

Returns a numpy array. For example, numpy.arange(3) returns the numpy array [0, 1, 2].
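Putting the three functions together, here is a short runnable sketch in the spirit of the two lines quoted at the top of this section (the values of nb and d are arbitrary):

import numpy as np

nb, d = 5, 3
xb = np.random.random((nb, d)).astype('float32')  # nb x d matrix of float32 values in [0, 1)
xb[:, 0] += np.arange(nb) / 1000.                 # shift column 0 by 0.000, 0.001, 0.002, ...

print(np.random.random())                 # size=None: a single float
print(np.array([1, 2, 2.5]).astype(int))  # [1 2 2]: truncation toward zero
print(np.arange(3))                       # [0 1 2]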

numpy.ravel() / numpy.flatten() / numpy.squeeze()

numpy.ravel(a, order='C')

  Return a flattened array

numpy.ndarray.flatten(order='C')

  Return a copy of the array collapsed into one dimension

numpy.squeeze(a, axis=None)

  Remove single-dimensional entries from the shape of an array.

 

What they have in common: all three reduce the dimensionality of an array (ravel() and flatten() always return a 1-D array).

Differences:

  ravel() returns a view (when possible), meaning that changing an element's value also changes the corresponding element of the original array;

  flatten() returns a copy, meaning that changing an element's value does not affect the original array;

  squeeze() returns a view, and only removes the dimensions of size 1 from the shape.

 

ravel() example:

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("size of array {}: {}".format(name, arr.size))
    print("shape of array {}: {}".format(name, arr.shape))
    print("ndim of array {}: {}".format(name, arr.ndim))
    print("dtype of the elements of array {}: {}".format(name, arr.dtype))
    #print("array: {}".format(arr.data))

a = np.floor(10*np.random.random((3,4)))
print(a)
log_type('a', a)

a1 = a.ravel()
print("a1: {}".format(a1))
log_type('a1', a1)
a1[2] = 100

print(a)
log_type('a', a)

 

flatten() example:

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("size of array {}: {}".format(name, arr.size))
    print("shape of array {}: {}".format(name, arr.shape))
    print("ndim of array {}: {}".format(name, arr.ndim))
    print("dtype of the elements of array {}: {}".format(name, arr.dtype))
    #print("array: {}".format(arr.data))

a = np.floor(10*np.random.random((3,4)))
print(a)
log_type('a', a)

a1 = a.flatten()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

 

squeeze() example:

1. The case without single-dimensional entries

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("size of array {}: {}".format(name, arr.size))
    print("shape of array {}: {}".format(name, arr.shape))
    print("ndim of array {}: {}".format(name, arr.ndim))
    print("dtype of the elements of array {}: {}".format(name, arr.dtype))
    #print("array: {}".format(arr.data))

a = np.floor(10*np.random.random((3,4)))
print(a)
log_type('a', a)

a1 = a.squeeze()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

As the results show, even when there are no single-dimensional entries, the array object returned by squeeze() is a view, not a copy.

 

2. The case with single-dimensional entries

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("size of array {}: {}".format(name, arr.size))
    print("shape of array {}: {}".format(name, arr.shape))
    print("ndim of array {}: {}".format(name, arr.ndim))
    print("dtype of the elements of array {}: {}".format(name, arr.dtype))
    #print("array: {}".format(arr.data))

a = np.floor(10*np.random.random((1,3,4)))
print(a)
log_type('a', a)

a1 = a.squeeze()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

 

Numpy: array creation with numpy.array(), numpy.arange(), np.linspace(), and basic array attributes

I. Creating Numpy arrays

part 1: np.array(), np.arange(), np.zeros(), np.ones()

 

import numpy as np
'''
ndarray arrays in numpy
'''

ary = np.array([1, 2, 3, 4, 5])
print(ary)
ary = ary * 10
print(ary)

'''
Creating ndarray objects
'''
# create a 2-D array
# np.array([[],[],...])
a = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(a)

# np.arange(start, stop, step (default 1))
b = np.arange(1, 10, 1)
print(b)

print("-------------np.zeros(number of elements, dtype='element type')-----")
# create a 1-D array:
c = np.zeros(10)
print(c, '; c.dtype:', c.dtype)

# create a 2-D array:
print(np.zeros((3, 4)))

print("----------np.ones(number of elements, dtype='element type')--------")
# create a 1-D array:
d = np.ones(10, dtype='int64')
print(d, '; d.dtype:', d.dtype)

# create a 3-D array:
print(np.ones( (2,3,4), dtype=np.int32 ))
# print the number of dimensions
print(np.ones( (2,3,4), dtype=np.int32 ).ndim)  # returns: 3 (dimensions)

 


 

part 2: np.linspace(start, stop, number of elements)

 

import numpy as np
a = np.arange( 10, 30, 5 )

b = np.arange( 0, 2, 0.3 )

c = np.arange(12).reshape(4,3)

d = np.random.random((2,3))  # random numbers in [0, 1), arranged as 2 rows and 3 columns

print(a)
print(b)
print(c)
print(d)

print("-----------------")
from numpy import pi
print(np.linspace( 0, 2*pi, 100 ))

print("-------------np.linspace(start, stop, number of elements)------------------")
print(np.sin(np.linspace( 0, 2*pi, 100 )))

 


II. Attributes of the Numpy ndarray object:

Array shape: array.shape

Number of dimensions: array.ndim

Element data type: array.dtype

Number of elements: array.size

Array indexing (subscripts): array[0]

 

'''
Basic array attributes
'''
import numpy as np

print("--------------------Example 1:------------------------------")
a = np.arange(15).reshape(3, 5)
print(a)
print(a.shape)     # print the array shape
print(len(a))      # print the number of rows
print(a.ndim)      # print the number of dimensions
print(a.dtype)     # print the data type of the elements in a
# print(a.dtype.name)
print(a.size)      # print the total number of elements


print("-------------------Example 2:---------------------------")
a = np.array([[1, 2, 3], [4, 5, 6]])
print(a)

# test the basic attributes of the array
print('a.shape:', a.shape)
print('a.size:', a.size)
print('len(a):', len(a))
# a.shape = (6, )  # this reshapes the original array into a 1-row, 6-element structure
# print(a, 'a.shape:', a.shape)

# indexing array elements
ary = np.arange(1, 28)
ary.shape = (3, 3, 3)   # create a three-dimensional array
print("ary.shape:", ary.shape, "\n", ary)

print("-----------------")
print('ary[0]:', ary[0])
print('ary[0][0]:', ary[0][0])
print('ary[0][0][0]:', ary[0][0][0])
print('ary[0,0,0]:', ary[0, 0, 0])

print("-----------------")


# iterate over the 3-D array: visit every element
for i in range(ary.shape[0]):
    for j in range(ary.shape[1]):
        for k in range(ary.shape[2]):
            print(ary[i, j, k], end=' ')
            

 


 

This concludes the introduction to the Python numpy module promote_types() example source code and the numpy module in Python. Thank you for your patience in reading. If you would like to learn more about Numpy in Jupyter erroring when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable, numpy.random.random & numpy.ndarray.astype & numpy.arange, numpy.ravel()/numpy.flatten()/numpy.squeeze(), or Numpy array creation with numpy.array(), numpy.arange(), np.linspace() and basic array attributes, please search this site.
