
Python numpy module — ufunc() example source code (the numpy module in Python)


Here we share knowledge about Python numpy module ufunc() example source code, to help you better understand the numpy module in Python. Along the way we also cover: AttributeError: 'float' object has no attribute 'log' / TypeError: ufunc 'log' not supported for the input types; the matplotlib error "loop of ufunc does not support argument 0 of type float which has no callable rint method"; and NetCDF Python "Cannot cast ufunc 'multiply' output from dtype('<U32') to dtype('float32')".

Contents of this article:

Python numpy module — ufunc() example source code (the numpy module in Python)

AttributeError: 'float' object has no attribute 'log' / TypeError: ufunc 'log' not supported for the input types

matplotlib error: loop of ufunc does not support argument 0 of type float which has no callable rint method

NetCDF Python: Cannot cast ufunc 'multiply' output from dtype('<U32') to dtype('float32')

Is Numpy ufunc.reduce slower than native Python reduce applied after ndarray.tolist()?

Python numpy module: ufunc() example source code

We extracted the following 19 code examples from open-source Python projects to illustrate how to use numpy.ufunc().
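Before the project listings, here is a minimal orientation sketch (not taken from any of the projects below) of what `numpy.ufunc` actually is: the type of NumPy's element-wise universal functions, which several of the examples below test against with `isinstance`.

import numpy as np

# Built-in element-wise operations such as np.add are ufunc instances.
print(isinstance(np.add, np.ufunc))   # True
print(np.add.nin, np.add.nout)        # 2 1  (binary ufunc: two inputs, one output)

# A plain Python function is not a ufunc, but np.frompyfunc can wrap it into one.
as_ufunc = np.frompyfunc(lambda a, b: a + b, 2, 1)
print(isinstance(as_ufunc, np.ufunc))  # True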

Project: lazyarray | Author: NeuralEnsemble | project source | file source
def __deepcopy__(self, memo):
    obj = type(self).__new__(type(self))
    if isinstance(self.base_value, VectorizedIterable):  # special case, but perhaps need to rethink
        obj.base_value = self.base_value                 # whether deepcopy is appropriate everywhere
    else:
        try:
            obj.base_value = deepcopy(self.base_value)
        except TypeError:  # base_value cannot be copied, e.g. is a generator (but see generator_tools from PyPI)
            obj.base_value = self.base_value  # so here we create a reference rather than deepcopying - could cause problems
    obj._shape = self._shape
    obj.dtype = self.dtype
    obj.operations = []
    for f, arg in self.operations:
        if isinstance(f, numpy.ufunc):
            obj.operations.append((f, deepcopy(arg)))
        else:
            obj.operations.append((deepcopy(f), deepcopy(arg)))
    return obj
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda | Author: SignalMedia | project source | file source
def __array_prepare__(self, result, context=None):
    """
    Gets called prior to a ufunc
    """

    # nice error message for non-ufunc types
    if context is not None and not isinstance(self._values, np.ndarray):
        obj = context[1][0]
        raise TypeError("{obj} with dtype {dtype} cannot perform "
                        "the numpy op {op}".format(
                            obj=type(obj).__name__,
                            dtype=getattr(obj, 'dtype', None),
                            op=context[0].__name__))
    return result
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda | Author: SignalMedia | project source | file source
def __array_wrap__(self, result, context=None):
    """
    Gets called after a ufunc. Needs additional handling as
    PeriodIndex stores internal data as int dtype

    Replace this to __numpy_ufunc__ in future version
    """
    if isinstance(context, tuple) and len(context) > 0:
        func = context[0]
        if (func is np.add):
            return self._add_delta(context[1][1])
        elif (func is np.subtract):
            return self._add_delta(-context[1][1])
        elif isinstance(func, np.ufunc):
            if 'M->M' not in func.types:
                msg = "ufunc '{0}' not supported for the PeriodIndex"
                # This should be TypeError, but TypeError cannot be raised
                # from here because numpy catches.
                raise ValueError(msg.format(func.__name__))

    if com.is_bool_dtype(result):
        return result
    return PeriodIndex(result, freq=self.freq, name=self.name)
Project: Theano-Deep-learning | Author: GeekLiB | project source | file source
def __init__(self, scalar_op, inplace_pattern=None, name=None,
             nfunc_spec=None, openmp=None):
    if inplace_pattern is None:
        inplace_pattern = {}
    self.name = name
    self.scalar_op = scalar_op
    self.inplace_pattern = inplace_pattern
    self.destroy_map = dict((o, [i]) for o, i in inplace_pattern.items())

    self.ufunc = None
    self.nfunc = None
    if nfunc_spec is None:
        nfunc_spec = getattr(scalar_op, 'nfunc_spec', None)
    self.nfunc_spec = nfunc_spec
    if nfunc_spec:
        self.nfunc = getattr(numpy, nfunc_spec[0])

    # precompute the hash of this node
    self._rehash()
    super(Elemwise, self).__init__(openmp=openmp)
Project: Theano-Deep-learning | Author: GeekLiB | project source | file source
def set_ufunc(self, scalar_op):
    # This is probably a speed up of the implementation
    if isinstance(scalar_op, theano.scalar.basic.Add):
        self.ufunc = numpy.add
    elif isinstance(scalar_op, theano.scalar.basic.Mul):
        self.ufunc = numpy.multiply
    elif isinstance(scalar_op, theano.scalar.basic.Maximum):
        self.ufunc = numpy.maximum
    elif isinstance(scalar_op, theano.scalar.basic.Minimum):
        self.ufunc = numpy.minimum
    elif isinstance(scalar_op, theano.scalar.basic.AND):
        self.ufunc = numpy.bitwise_and
    elif isinstance(scalar_op, theano.scalar.basic.OR):
        self.ufunc = numpy.bitwise_or
    elif isinstance(scalar_op, theano.scalar.basic.XOR):
        self.ufunc = numpy.bitwise_xor
    else:
        self.ufunc = numpy.frompyfunc(scalar_op.impl, 2, 1)
Project: npstreams | Author: LaurentRDC | project source | file source
def _check_binary_ufunc(ufunc):
    """ Check that ufunc is suitable for ``ireduce_ufunc`` """
    if not isinstance(ufunc, np.ufunc):
        raise TypeError('{} is not a NumPy Ufunc'.format(ufunc.__name__))
    if not ufunc.nin == 2:
        raise ValueError('Only binary ufuncs are supported, and {} is \
not one of them'.format(ufunc.__name__))

    # Ufuncs that always return bool are problematic because they can be reduced
    # but not be accumulated.
    # Recall: numpy.dtype('?') == np.bool
    if all(type_signature[-1] == '?' for type_signature in ufunc.types):
        raise ValueError('Only binary ufuncs that preserve type are supported, \
and {} is not one of them'.format(ufunc.__name__))
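Assuming `_check_binary_ufunc` is in scope, a quick sketch of how its three checks behave on real ufuncs:

import numpy as np

_check_binary_ufunc(np.add)  # passes: np.add is binary (nin == 2) and type-preserving

for bad in (np.negative, np.greater):
    try:
        _check_binary_ufunc(bad)
    except (TypeError, ValueError) as exc:
        # np.negative fails the nin == 2 check; np.greater only has
        # bool-returning type signatures, so it fails the last check.
        print(type(exc).__name__, ':', exc)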
Project: npstreams | Author: LaurentRDC | project source | file source
def _ireduce_ufunc_new_axis(arrays, ufunc, **kwargs):
    """
    Reduction operation for arrays, in the direction of a new axis (i.e. stacking).

    Parameters
    ----------
    arrays : iterable
        Arrays to be reduced.
    ufunc : numpy.ufunc
        Binary universal function. Must have a signature of the form ufunc(x1, x2, ...)
    kwargs
        Keyword arguments are passed to ``ufunc``.

    Yields
    ------
    reduced : ndarray
    """
    arrays = iter(arrays)
    first = next(arrays)

    kwargs.pop('axis')

    dtype = kwargs.get('dtype', None)
    if dtype is None:
        dtype = first.dtype
    else:
        kwargs['casting'] = 'unsafe'

    # If the out parameter was already given
    # we create the accumulator from it
    # Otherwise, it is a copy of the first array
    accumulator = kwargs.pop('out', None)
    if accumulator is not None:
        accumulator[:] = first
    else:
        accumulator = np.array(first, copy=True).astype(dtype)
    yield accumulator

    for array in arrays:
        ufunc(accumulator, array, out=accumulator, **kwargs)
        yield accumulator
Project: npstreams | Author: LaurentRDC | project source | file source
def _ireduce_ufunc_all_axes(arrays, ufunc, **kwargs):
    """
    Reduction operation for arrays, over all axes.

    Parameters
    ----------
    arrays : iterable
        Arrays to be reduced.
    ufunc : numpy.ufunc
        Binary universal function. Must have a signature of the form ufunc(x1, x2, ...)
    kwargs
        Keyword arguments are passed to ``ufunc``. The ``out`` parameter is ignored.

    Yields
    ------
    reduced : scalar
    """
    arrays = iter(arrays)
    first = next(arrays)

    kwargs['axis'] = None
    kwargs.pop('out', None)  # Remove the out-parameter if provided.
    axis_reduce = partial(ufunc.reduce, **kwargs)

    accumulator = axis_reduce(first)
    yield accumulator

    for array in arrays:
        accumulator = axis_reduce([accumulator, axis_reduce(array)])
        yield accumulator
Project: lombscargle | Author: jakevdp | project source | file source
def _validate_method(method, dy, fit_bias, nterms,
                     frequency, assume_regular_frequency):
    fast_method_ok = hasattr(np.ufunc, 'at')
    if not fast_method_ok:
        warnings.warn("Fast Lomb-Scargle methods require numpy version 1.8 "
                      "or newer. Using slower methods instead.")

    # automatically choose the appropriate method
    if method == 'auto':
        if nterms != 1:
            if (fast_method_ok and len(frequency) > 100
                    and _is_regular(frequency, assume_regular_frequency)):
                method = 'fastchi2'
            else:
                method = 'chi2'
        elif (fast_method_ok and len(frequency) > 100
                and _is_regular(frequency, assume_regular_frequency)):
            method = 'fast'
        elif dy is None and not fit_bias:
            method = 'scipy'
        else:
            method = 'slow'

    if method not in METHODS:
        raise ValueError("invalid method: {0}".format(method))

    return method
Project: lazyarray | Author: NeuralEnsemble | project source | file source
def _build_ufunc(func):
    """Return a ufunc that works with lazy arrays"""
    def larray_compatible_ufunc(x):
        if isinstance(x, larray):
            y = deepcopy(x)
            y.apply(func)
            return y
        else:
            return func(x)
    return larray_compatible_ufunc
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda | Author: SignalMedia | project source | file source
def __array_wrap__(self, result, context=None):
    """
    Gets called after a ufunc
    """
    return self._constructor(result, index=self.index,
                             copy=False).__finalize__(self)
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda | Author: SignalMedia | project source | file source
def apply(self, func, axis=0, broadcast=False, reduce=False):
    """
    Analogous to DataFrame.apply, for SparseDataFrame

    Parameters
    ----------
    func : function
        Function to apply to each column
    axis : {0, 1, 'index', 'columns'}
    broadcast : bool, default False
        For aggregation functions, return object of same size with values
        propagated

    Returns
    -------
    applied : Series or SparseDataFrame
    """
    if not len(self.columns):
        return self
    axis = self._get_axis_number(axis)

    if isinstance(func, np.ufunc):
        new_series = {}
        for k, v in compat.iteritems(self):
            applied = func(v)
            applied.fill_value = func(applied.fill_value)
            new_series[k] = applied
        return self._constructor(
            new_series, columns=self.columns,
            default_fill_value=self._default_fill_value,
            kind=self._default_kind).__finalize__(self)
    else:
        if not broadcast:
            return self._apply_standard(func, axis, reduce=reduce)
        else:
            return self._apply_broadcast(func, axis)
Project: Theano-Deep-learning | Author: GeekLiB | project source | file source
def __getstate__(self):
    d = copy(self.__dict__)
    d.pop('ufunc')
    d.pop('nfunc')
    d.pop('__epydoc_asRoutine', None)
    d.pop('_hashval')
    return d
Project: Theano-Deep-learning | Author: GeekLiB | project source | file source
def __setstate__(self, d):
    super(Elemwise, self).__setstate__(d)
    self.ufunc = None
    self.nfunc = None
    if getattr(self, 'nfunc_spec', None):
        self.nfunc = getattr(numpy, self.nfunc_spec[0])
    elif 0 < self.scalar_op.nin < 32:
        self.ufunc = numpy.frompyfunc(self.scalar_op.impl,
                                      self.scalar_op.nin,
                                      self.scalar_op.nout)
    self._rehash()
Project: npstreams | Author: LaurentRDC | project source | file source
def reduce_ufunc(*args, **kwargs):
    """
    Streaming reduction generator function from a binary NumPy ufunc. Essentially the
    function equivalent to `ireduce_ufunc`.

    ``ufunc`` must be a NumPy binary Ufunc (i.e. it takes two arguments). Moreover,
    for performance reasons, ufunc must have the same return types as input types.
    This precludes the use of ``numpy.greater``, for example.

    Note that performance is much better for the default ``axis = -1``. In such a case,
    reduction operations can occur in-place. This also allows to operate in constant-memory.

    Parameters
    ----------
    arrays : iterable
        Arrays to be reduced.
    ufunc : numpy.ufunc
        Binary universal function.
    axis : int or None, optional
        Reduction axis. Default is to reduce the arrays in the stream as if
        they had been stacked along a new axis, then reduce along this new axis.
        If None, arrays are flattened before reduction. If `axis` is an int larger than
        the number of dimensions in the arrays of the stream, arrays are reduced
        along the new axis. Note that not all NumPy Ufuncs support
        ``axis = None``, e.g. ``numpy.subtract``.
    dtype : numpy.dtype or None, optional
        Overrides the dtype of the calculation and output arrays.
    ignore_nan : bool, optional
        If True and ufunc has an identity value (e.g. ``numpy.add.identity`` is 0), then NaNs
        are replaced with this identity. An error is raised if ``ufunc`` has no identity
        (e.g. ``numpy.maximum.identity`` is ``None``).
    kwargs
        Keyword arguments are passed to ``ufunc``. Note that some valid ufunc keyword arguments
        (e.g. ``keepdims``) are not valid for all streaming functions. Note that
        contrary to NumPy v. 1.10+, ``casting = 'unsafe'`` is the default in npstreams.

    Yields
    ------
    reduced : ndarray or scalar

    Raises
    ------
    TypeError : if ``ufunc`` is not a NumPy ufunc.
    ValueError : if ``ignore_nan`` is True but ``ufunc`` has no identity
    ValueError : if ``ufunc`` is not a binary ufunc
    ValueError : if ``ufunc`` does not have the same input type as output type
    """
    return last(ireduce_ufunc(*args, **kwargs))
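A usage sketch, assuming `reduce_ufunc` is importable from the package's top level and using a small made-up stream of arrays:

import numpy as np
from npstreams import reduce_ufunc  # assumed import path

stream = (np.full((2, 2), i) for i in range(1, 4))  # lazy stream: arrays of 1s, 2s, 3s
total = reduce_ufunc(stream, np.add)  # default axis reduces across the stream elementwise
print(total)  # [[6 6]
              #  [6 6]]  -- 1 + 2 + 3 in every position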
Project: npstreams | Author: LaurentRDC | project source | file source
def _ireduce_ufunc_existing_axis(arrays, ufunc, **kwargs):
    """
    Reduction operation for arrays, in the direction of an existing axis.

    Parameters
    ----------
    arrays : iterable
        Arrays to be reduced.
    ufunc : numpy.ufunc
        Binary universal function. Must have a signature of the form ufunc(x1, x2, ...)
    kwargs
        Keyword arguments are passed to ``ufunc``. The ``out`` parameter is ignored.

    Yields
    ------
    reduced : ndarray
    """
    arrays = iter(arrays)
    first = next(arrays)

    if kwargs['axis'] not in range(first.ndim):
        raise ValueError('Axis {} not supported on arrays of shape {}.'.format(kwargs['axis'], first.shape))

    # Remove the out-parameter if provided.
    kwargs.pop('out', None)

    dtype = kwargs.get('dtype')
    if dtype is None:
        dtype = first.dtype

    axis_reduce = partial(ufunc.reduce, **kwargs)

    accumulator = np.atleast_1d(axis_reduce(first))
    yield accumulator

    # On the first pass of the following loop, accumulator is missing a dimension;
    # therefore, the stacking function cannot be 'concatenate'
    second = next(arrays)
    accumulator = np.stack([accumulator, np.atleast_1d(axis_reduce(second))], axis=-1)
    yield accumulator

    # On the second pass, the new dimension exists, and thus we switch to
    # using concatenate.
    for array in arrays:
        reduced = np.expand_dims(np.atleast_1d(axis_reduce(array)), axis=accumulator.ndim - 1)
        accumulator = np.concatenate([accumulator, reduced], axis=accumulator.ndim - 1)
        yield accumulator
Project: plotnine | Author: has2k1 | project source | file source
def compute_group(cls, data, scales, **params):
    fun = params['fun']
    n = params['n']
    args = params['args']
    xlim = params['xlim']

    try:
        range_x = xlim or scales.x.dimension((0, 0))
    except AttributeError:
        raise PlotnineError(
            "Missing 'x' aesthetic and 'xlim' is {}".format(xlim))

    if not hasattr(fun, '__call__'):
        raise PlotnineError(
            "stat_function requires parameter 'fun' to be " +
            "a function or any other callable object")

    old_fun = fun
    if isinstance(args, (list, tuple)):
        def fun(x):
            return old_fun(x, *args)
    elif isinstance(args, dict):
        def fun(x):
            return old_fun(x, **args)
    elif args is not None:
        def fun(x):
            return old_fun(x, args)
    else:
        def fun(x):
            return old_fun(x)

    x = np.linspace(range_x[0], range_x[1], n)

    # continuous scale
    with suppress(AttributeError):
        x = scales.x.trans.inverse(x)

    # We know these can handle array-likes
    if isinstance(old_fun, (np.ufunc, np.vectorize)):
        y = fun(x)
    else:
        y = [fun(val) for val in x]

    new_data = pd.DataFrame({'x': x, 'y': y})
    return new_data
Project: Theano-Deep-learning | Author: GeekLiB | project source | file source
def prepare_node(self, node, storage_map, compute_map, impl):
    # Postpone the ufunc building to the last minutes
    # NumPy ufunc support only up to 31 inputs.
    # But our c code support more.
    if (len(node.inputs) < 32 and
            (self.nfunc is None or
             self.scalar_op.nin != len(node.inputs)) and
            self.ufunc is None and
            impl == 'py'):

        ufunc = numpy.frompyfunc(self.scalar_op.impl,
                                 len(node.inputs),
                                 self.scalar_op.nout)
        if self.scalar_op.nin > 0:
            # We can reuse it for many nodes
            self.ufunc = ufunc
        else:
            node.tag.ufunc = ufunc

    # Numpy ufuncs will sometimes perform operations in
    # float16, in particular when the input is int8.
    # This is not something that we want, and we do not
    # do it in the C code, so we specify that the computation
    # should be carried out in the returned dtype.
    # This is done via the "sig" kwarg of the ufunc, its value
    # should be something like "ff->f", where the characters
    # represent the dtype of the inputs and outputs.

    # NumPy 1.10.1 raise an error when giving the signature
    # when the input is complex. So add it only when inputs is int.
    out_dtype = node.outputs[0].dtype
    if (out_dtype in float_dtypes and
            isinstance(self.nfunc, numpy.ufunc) and
            node.inputs[0].dtype in discrete_dtypes):
        char = numpy.sctype2char(out_dtype)
        sig = char * node.nin + '->' + char * node.nout
        node.tag.sig = sig
    node.tag.fake_node = Apply(
        self.scalar_op,
        [get_scalar_type(dtype=input.type.dtype).make_variable()
         for input in node.inputs],
        [get_scalar_type(dtype=output.type.dtype).make_variable()
         for output in node.outputs])

    self.scalar_op.prepare_node(node.tag.fake_node, None, impl)
Project: Theano-Deep-learning | Author: GeekLiB | project source | file source
def perform(self, node, inp, out):
    input, = inp
    output, = out
    axis = self.axis
    if axis is None:
        axis = list(range(input.ndim))
    variable = input
    to_reduce = reversed(sorted(axis))

    if hasattr(self, 'acc_dtype') and self.acc_dtype is not None:
        acc_dtype = self.acc_dtype
    else:
        acc_dtype = node.outputs[0].type.dtype

    if to_reduce:
        for dimension in to_reduce:
            # If it's a zero-size array, use scalar_op.identity
            # if available
            if variable.shape[dimension] == 0:
                if hasattr(self.scalar_op, 'identity'):
                    # Compute the shape of the output
                    v_shape = list(variable.shape)
                    del v_shape[dimension]
                    variable = numpy.empty(tuple(v_shape),
                                           dtype=acc_dtype)
                    variable.fill(self.scalar_op.identity)
                else:
                    raise ValueError((
                        "Input (%s) has zero-size on axis %s, but "
                        "self.scalar_op (%s) has no attribute 'identity'"
                        % (variable, dimension, self.scalar_op)))
            else:
                # Numpy 1.6 has a bug where you sometimes have to specify
                # "dtype='object'" in reduce for it to work, if the ufunc
                # was built with "frompyfunc". We need to find out if we
                # are in one of these cases (only "object" is supported in
                # the output).
                if ((self.ufunc.ntypes == 1) and
                        (self.ufunc.types[0][-1] == 'O')):
                    variable = self.ufunc.reduce(variable, dimension,
                                                 dtype='object')
                else:
                    variable = self.ufunc.reduce(variable, dimension,
                                                 dtype=acc_dtype)

        variable = numpy.asarray(variable)
        if numpy.may_share_memory(variable, input):
            # perhaps numpy is clever for reductions of size 1?
            # We don't want this.
            variable = variable.copy()
        output[0] = theano._asarray(variable,
                                    dtype=node.outputs[0].type.dtype)
    else:
        # Force a copy
        output[0] = numpy.array(variable, copy=True,
                                dtype=node.outputs[0].type.dtype)

AttributeError: 'float' object has no attribute 'log' / TypeError: ufunc 'log' not supported for the input types

How do you solve "AttributeError: 'float' object has no attribute 'log'" / "TypeError: ufunc 'log' not supported for the input types"?

I have a series of fluorescence intensity data in one column ('2.4M'). I tried to create a new column 'ln_2.4M' by taking the natural log of the '2.4M' column, but I get the error:

AttributeError: 'float' object has no attribute 'log'

  1. df["ln_2.4M"] = np.log(df["2.4M"])

I then tried using a for loop to take the log of each fluorescence value in the '2.4M' column:

ln2_4M = []
for x in df["2.4M"]:
    ln2_4M = np.log(x)
    print(ln2_4M)

Although this correctly prints ln2_4M as the log of the '2.4M' column, I cannot use the data, because it also raises: TypeError: ufunc 'log' not supported for the input types, and the inputs could not be safely coerced to any supported types according to the casting rule 'safe'

I don't know why. Any help understanding what is going on and how to fix it is appreciated. Thanks.

Solution

Then I tried the method below, and it worked:

  1. df["2.4M"] = pd.to_numeric(df["2.4M"],errors = ''coerce'')
  2. df["ln_24M"] = np.log(df["2.4M"])

matplotlib error: loop of ufunc does not support argument 0 of type float which has no callable rint method

How do you solve the matplotlib error "loop of ufunc does not support argument 0 of type float which has no callable rint method"?

Here is my data series: df =

        count
17    83396.142857
18    35970.000000
19    54082.428571
20    21759.714286
21    16899.571429
22    19870.571429
23    32491.285714
24    40425.285714
25    30780.285714
26    11923.428571
27    13698.571429
28    28028.000000
29    52575.000000

First I converted it to int to avoid any issues:

df['count'] = df['count'].astype(int)
df.index = df.index.astype(int)

I am trying to plot the data with:

_, ax = plt.subplots(1, 2)
df.plot.pie(ax=ax[1], y=df['count'])
plt.show()

But it keeps throwing this error:

Type:
  TypeError
Message:
  loop of ufunc does not support argument 0 of type float which has no callable rint method
Stacktrace:
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/backends/backend_macosx.py", line 61, in _draw
    self.figure.draw(renderer)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/artist.py", line 41, in draw_wrapper
    return draw(artist, renderer, *args, **kwargs)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/figure.py", line 1863, in draw
    mimage._draw_list_compositing_images(
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/image.py", line 131, in _draw_list_compositing_images
    a.draw(renderer)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/artist.py", **kwargs)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/cbook/deprecation.py", line 411, in wrapper
    return func(*inner_args, **inner_kwargs)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/axes/_base.py", line 2747, in draw
    mimage._draw_list_compositing_images(renderer, self, artists)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/image.py", **kwargs)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/axis.py", line 1164, in draw
    ticks_to_draw = self._update_ticks()
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/axis.py", line 1022, in _update_ticks
    major_labels = self.major.formatter.format_ticks(major_locs)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/ticker.py", line 249, in format_ticks
    self.set_locs(values)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/ticker.py", line 782, in set_locs
    self._set_format()
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/matplotlib/ticker.py", line 884, in _set_format
    if np.abs(locs - np.round(locs, decimals=sigfigs)).max() < thresh:
  File "<__array_function__ internals>", line 5, in round_
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3739, in round_
    return around(a, decimals=decimals, out=out)
  File "<__array_function__ internals>", in around
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 3314, in around
    return _wrapfunc(a, 'round', out=out)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 66, in _wrapfunc
    return _wrapit(obj, method, **kwds)
  File "/Users/eyshikaagarwal/.virtualenvs/env-hss-ml/lib/python3.8/site-packages/numpy/core/fromnumeric.py", line 43, in _wrapit
    result = getattr(asarray(obj), method)(*args, **kwds)

Any suggestions on what is wrong here? I have already spent hours trying to understand and fix it, with no luck. Any help would be great.

Update:

Thanks @ehsan for the answer for the pie chart, but I still hit the same error when drawing a simple line plot with the following:

plot_kwargs = {'xticks': df.index, 'grid': True, 'color': 'Red', 'title': "Average "}

df.plot(ylabel='Average No. of tracks ', **plot_kwargs)

This is exactly the same error I got with the earlier code, and I don't understand why. I even used y='count' here too, just to see if anything changed, but the error is the same. Any insight would help. Thanks!

Solution

You want this:

_, ax = plt.subplots(1, 2)
df.plot.pie(ax=ax[1], y='count')
plt.show()

The mistake is that you used y=df['count'] instead of simply y='count'. You are plotting with pandas, so there is no need to pass the column values, just the column name. Also, you don't need to convert the dtype to int, unless you want to.

Output: (pie chart image omitted)
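For the line-plot failure in the update, a likely cause in the same family (an assumption, not confirmed in the thread): values or index entries still reaching matplotlib's tick formatter as `object` dtype, so NumPy's `rint`/`round` loop has nothing to call on them. A hedged sketch of the conversion that usually clears it:

import pandas as pd

# Ensure both the data column and the index are true numeric dtypes
# (not object dtype) before handing the frame to matplotlib.
df["count"] = pd.to_numeric(df["count"], errors="coerce")
df.index = pd.to_numeric(df.index, errors="coerce")

plot_kwargs = {'xticks': df.index, 'grid': True, 'color': 'Red', 'title': "Average "}
df.plot(y='count', ylabel='Average No. of tracks ', **plot_kwargs)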

NetCDF Python: Cannot cast ufunc 'multiply' output from dtype('<U32') to dtype('float32')

How do you solve NetCDF Python "Cannot cast ufunc 'multiply' output from dtype('<U32') to dtype('float32')"?

I am trying to load a netCDF file into a dataframe using the xarray or netCDF4 library. Normally this is not a problem, since my netCDFs mostly come with latitude, longitude and data values as Float32. I assume my error is because some of the data types are being passed as Float64.

I currently get the same error from both libraries on load, presumably because both use numpy. I am not doing any math, just loading.

numpy.core._exceptions.UFuncTypeError: Cannot cast ufunc 'multiply' output from dtype('<U32')
to dtype('float32') with casting rule 'same_kind'

Using print(netCDF4.Dataset("d:\\netdcdf.nc")) produces the following description:

dimensions(sizes): time(1), lon(841), lat(681)
variables(dimensions): float64 time(time), float64 lon(lon), float64 lat(lat), int32 crs(), float32 deadpool(time, lat, lon)

My script is below, including loading examples for both xarray and netCDF4.

# This file is designed to convert netcdf files to the BOM standard format.
import netCDF4
import pandas as pd
import xarray as xr

def main():
    pass

if __name__ == '__main__':
    inputfile = 'D:\\Temp\\WeatherDownloads\\Weather\\deadpool.aus.nc'

    # xarray setup, debug and load
    ncx = xr.open_dataset(inputfile)
    ncdf = ncx.deadpool.to_dataframe()  # fails here if we use xarray
    print(ncdf.head(10))

    # NetCDF4 setup, debug and load
    nc = netCDF4.Dataset(inputfile, mode='r')
    nc.variables.keys()
    lat = nc.variables['lat'][:]
    lon = nc.variables['lon'][:]
    time = nc.variables['time']
    datavar = nc.variables['deadpool'][:]  # fails here if we use netCDF4
    print("The dtype of lat is: " + str(lat.dtype))
    print("The dtype of lon is: " + str(lon.dtype))
    print("The dtype of time is: " + str(time.dtype))
    print("The dtype of datavar is: " + str(datavar.dtype))
    data_ts = pd.Series(datavar, index=time)
    print(data_ts.head(10))
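The thread ends without an accepted answer, but a common cause of this particular cast error is a `scale_factor` or `add_offset` attribute stored as a string in the file: both xarray and netCDF4 apply `value * scale_factor + add_offset` automatically on load, and multiplying by a `<U32` (32-character string) attribute fails exactly this way. A hedged diagnostic sketch, reusing the question's file path and variable name:

import netCDF4

nc = netCDF4.Dataset('D:\\Temp\\WeatherDownloads\\Weather\\deadpool.aus.nc', mode='r')
var = nc.variables['deadpool']

# scale_factor / add_offset should be numeric; a str here would explain dtype('<U32').
for name in var.ncattrs():
    print(name, '->', repr(var.getncattr(name)), type(var.getncattr(name)))

# Workaround: disable automatic scaling and apply it manually with float() casts.
var.set_auto_maskandscale(False)
raw = var[:]
scale = float(getattr(var, 'scale_factor', 1.0))
offset = float(getattr(var, 'add_offset', 0.0))
data = raw * scale + offset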

Is Numpy ufunc.reduce slower than native Python reduce applied after ndarray.tolist()?

How can Numpy's ufunc.reduce be slower than applying native Python reduce after ndarray.tolist()?

When playing with Python lists, I like to use many functional-programming features. When I switched to Numpy for large data sets, I expected it to be considerably more efficient than native Python list operations on ndarray.tolist(), since it is stored differently.

So when I tried to apply FP things like map, reduce and filter on a Numpy array, I first searched Numpy's documentation for some "optimized things". What I found was numpy.ufunc.reduce, which seemed to be the right thing. But, out of curiosity, I ran a simple test of both approaches:

1. Using Numpy's reduce:

import numpy as np

a = np.array(range(100000000))
adf = lambda res, a: res + a
u_adf = np.frompyfunc(adf, 2, 1)
print(u_adf.reduce(a, initial=0))

2. Using ndarray.tolist(), then Python's native reduce:

import numpy as np
from functools import reduce

a = np.array(range(100000000))
adf = lambda res, a: res + a
print(reduce(adf, a.tolist(), 0))

And here comes the most unexpected part:

> python 1.py
4999999950000000
python 1.py  28.00s user 5.71s system 102% cpu 32.925 total
> python 2.py
4999999950000000
python 2.py  26.38s user 6.38s system 103% cpu 31.792 total

So the supposedly "dumb" method is actually the more efficient one?

How can that be? Can anyone explain this for me? And I would appreciate some advice on using functional-programming features on Numpy arrays.

Much appreciated ^_^
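One plausible explanation (reasoning from how `np.frompyfunc` works, not from the original thread): `frompyfunc` builds an object-dtype ufunc whose inner loop still calls the Python lambda once per element, so `u_adf.reduce` does essentially the same Python-level work as `functools.reduce`, plus boxing overhead. The vectorized path is a native ufunc's reduction, which loops in C. A sketch of the comparison:

import numpy as np
import timeit

a = np.arange(10_000_000)  # smaller than the question's array; same comparison

adf = lambda res, x: res + x
u_adf = np.frompyfunc(adf, 2, 1)

print(timeit.timeit(lambda: u_adf.reduce(a, initial=0), number=1))  # Python-speed loop
print(timeit.timeit(lambda: np.add.reduce(a), number=1))            # C-speed loop
print(np.add.reduce(a) == a.sum())  # True: a.sum() is the idiomatic spelling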

This concludes our introduction to the Python numpy module ufunc() example source code and the numpy module in Python. Thank you for your patience. For more on "AttributeError: 'float' object has no attribute 'log' / TypeError: ufunc 'log' not supported for the input types", the matplotlib error "loop of ufunc does not support argument 0 of type float which has no callable rint method", and NetCDF Python "Cannot cast ufunc 'multiply' output from dtype('<U32')", please search this site.
