
Python numpy module: intc() example source code (python numpy interp)



In this article we walk through example source code for the Python numpy module's intc() and discuss related questions about python numpy interp. We also cover: Numpy in Jupyter erroring on print (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable; numpy.random.random & numpy.ndarray.astype & numpy.arange; numpy.ravel()/numpy.flatten()/numpy.squeeze(); and Numpy array creation with numpy.array(), numpy.arange(), np.linspace() plus basic array attributes, to give you a fuller picture of the topic.

Contents:

Python numpy module: intc() example source code (python numpy interp)
Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable
numpy.random.random & numpy.ndarray.astype & numpy.arange
numpy.ravel()/numpy.flatten()/numpy.squeeze()
Numpy: array creation with numpy.array(), numpy.arange(), np.linspace(), and basic array attributes

Python numpy module: intc() example source code

The following code examples (50 in the original collection), extracted from open-source Python projects, illustrate how numpy.intc() is used in practice.
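For context (an addition, not part of the original listing): np.intc is NumPy's alias for the platform C int, normally 32 bits, which is why the C- and Cython-backed APIs in the examples below (scipy.sparse index arrays, compiled extensions, ctypes pointers) expect it. A minimal sketch:

import numpy as np

# np.intc mirrors the platform C `int`; on mainstream platforms it is int32
print(np.dtype(np.intc))       # int32 (on most platforms)
print(np.intc(3.7))            # 3 -- the scalar constructor truncates like a C cast

# arrays headed for C extensions are typically cast explicitly
indices = np.arange(10, dtype=np.intc)
print(indices.dtype.itemsize)  # 4 bytes on most platforms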

Project: extra-trees | Author: allrod5

def _validate_X_predict(
        self, X: np.ndarray, check_input: bool) -> np.ndarray:
    if check_input:
        X = check_array(X, dtype=DTYPE, accept_sparse="csr")
        if issparse(X) and (X.indices.dtype != np.intc or
                            X.indptr.dtype != np.intc):
            raise ValueError(
                "No support for np.int64 index based sparse matrices")

    n_features = X.shape[1]
    if self.n_features_ != n_features:
        raise ValueError(
            "Number of features of the model must match the input."
            " Model n_features is %s and input n_features is %s "
            % (self.n_features_, n_features))

    return X
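Many of the snippets in this listing reject sparse matrices whose index arrays are not np.intc. A small sketch (using scipy.sparse directly; the cast-back branch is illustrative, whereas the validators in these snippets raise instead) of what that check looks like:

import numpy as np
from scipy.sparse import csr_matrix

X = csr_matrix(np.eye(3))
print(X.indices.dtype, X.indptr.dtype)   # int32 (np.intc) on most builds

# simulate the np.int64 index arrays that the validators above reject
X.indices = X.indices.astype(np.int64)
X.indptr = X.indptr.astype(np.int64)
if X.indices.dtype != np.intc or X.indptr.dtype != np.intc:
    X.indices = X.indices.astype(np.intc)  # cast back instead of raising
    X.indptr = X.indptr.astype(np.intc)
print(X.indices.dtype, X.indptr.dtype)   # int32 int32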
Project: incubator-airflow-old | Author: apache

def default(self, obj):
    # convert dates and numpy objects in a json serializable format
    if isinstance(obj, datetime):
        return obj.strftime('%Y-%m-%dT%H:%M:%SZ')
    elif isinstance(obj, date):
        return obj.strftime('%Y-%m-%d')
    elif type(obj) in (np.int_, np.intc, np.intp, np.int8, np.int16,
                       np.int32, np.int64, np.uint8, np.uint16,
                       np.uint32, np.uint64):
        return int(obj)
    elif type(obj) in (np.bool_,):
        return bool(obj)
    elif type(obj) in (np.float_, np.float16, np.float32, np.float64,
                       np.complex_, np.complex64, np.complex128):
        return float(obj)

    # Let the base class default method raise the TypeError
    return json.JSONEncoder.default(self, obj)
Project: scikit-garden | Author: scikit-garden

def _validate_X_predict(self, X, check_input):
    """Validate X whenever one tries to predict, apply, predict_proba"""
    if self.tree_ is None:
        raise NotFittedError("Estimator not fitted, "
                             "call `fit` before exploiting the model.")

    if check_input:
        X = check_array(X, accept_sparse="csr")
        if issparse(X) and (X.indices.dtype != np.intc or
                            X.indptr.dtype != np.intc):
            raise ValueError("No support for np.int64 index based "
                             "sparse matrices")

    n_features = X.shape[1]
    if self.n_features_ != n_features:
        raise ValueError("Number of features of the model must "
                         "match the input. Model n_features is %s and "
                         "input n_features is %s "
                         % (self.n_features_, n_features))

    return X
Project: RoboBohr | Author: bhimmetoglu

def pairFeatureMatrix(self, elementList):
    """ Construction of pair-distance matrices """

    # Initiate
    nSpecies = len(elementList)

    # Get the molecular structure
    pos = np.array(self.molecule.positions, dtype=float)   # Atomic positions
    elInd = np.array(self.molecule.elInd, dtype=np.intc)   # Element indices matching elementList
    natoms = len(self.molecule.names)  # Total number of atoms in the molecule

    # Initiate the matrix (integer division: np.zeros needs int dimensions)
    dim1 = natoms * (natoms - 1) // 2     # First dimension (pairwise distances)
    dim2 = nSpecies * (nSpecies + 1) // 2  # Number of possible pairs
    featMat = np.zeros((dim1, dim2))  # To be passed to fun_pairFeatures (compiled C code)

    # Call the C function to store the pairFeatures
    pairFeatures.fun_pairFeatures(nSpecies, natoms, elInd, pos, featMat)

    # Return featMat
    return featMat
Project: tensorforce | Author: reinforceio

def execute(self, actions):
    """
    Pass action to universe environment, return reward, next step, terminal state and
    additional info.

    :param action: action to execute as numpy array, should have dtype np.intc and should adhere to
        the specification given in DeepMindLabEnvironment.action_spec(level_id)
    :return: dict containing the next state, the reward, and a boolean indicating if the
        next state is a terminal state
    """
    adjusted_actions = list()
    for action_spec in self.level.action_spec():
        if action_spec['min'] == -1 and action_spec['max'] == 1:
            adjusted_actions.append(actions[action_spec['name']] - 1)
        else:
            adjusted_actions.append(actions[action_spec['name']])  # clip?
    actions = np.array(adjusted_actions, dtype=np.intc)

    reward = self.level.step(action=actions, num_steps=self.repeat_action)
    state = self.level.observations()['RGB_INTERLACED']
    terminal = not self.level.is_running()
    return state, terminal, reward
Project: airflow | Author: apache-airflow

# The scraped snippet was truncated; the missing branches are restored here
# to mirror the incubator-airflow example above.
def default(self, obj):
    # convert dates and numpy objects in a json serializable format
    if isinstance(obj, datetime):
        return obj.strftime('%Y-%m-%dT%H:%M:%SZ')
    elif isinstance(obj, date):
        return obj.strftime('%Y-%m-%d')
    elif type(obj) in [np.int_, np.intc, np.intp, np.int8, np.int16,
                       np.int32, np.int64, np.uint8, np.uint16,
                       np.uint32, np.uint64]:
        return int(obj)
    elif type(obj) in [np.bool_]:
        return bool(obj)
    elif type(obj) in [np.float_, np.float16, np.float32, np.float64,
                       np.complex_, np.complex64, np.complex128]:
        return float(obj)

    # Let the base class default method raise the TypeError
    return json.JSONEncoder.default(self, obj)
Project: rankpy | Author: dmitru

def predict(self, queries, n_jobs=1):
    '''
    Predict the ranking score for each individual document of the given queries.

    n_jobs: int, optional (default is 1)
        The number of working threads that will be spawned to compute
        the ranking scores. If -1, the current number of cpus will be used.
    '''
    if self.trained is False:
        raise ValueError('the model has not been trained yet')

    predictions = np.zeros(queries.document_count(), dtype=np.float64)

    n_jobs = max(1, min(n_jobs if n_jobs >= 0 else n_jobs + cpu_count() + 1,
                        queries.document_count()))

    indices = np.linspace(0, queries.document_count(), n_jobs + 1).astype(np.intc)

    # class name scraped as "LambdarandomForest"; restored to CamelCase so the
    # name-mangled '__predict' lookup matches
    Parallel(n_jobs=n_jobs, backend="threading")(
        delayed(parallel_helper, check_pickle=False)
        (LambdaRandomForest, '_LambdaRandomForest__predict', self.estimators,
         queries.feature_vectors[indices[i]:indices[i + 1]],
         predictions[indices[i]:indices[i + 1]]) for i in range(indices.size - 1))

    predictions /= len(self.estimators)

    return predictions
Project: Theano-Deep-learning | Author: GeekLiB

def perform(self, node, inputs, out):
    # TODO: support broadcast!
    # TODO: assert all inputs have the same shape
    z, = out
    if (z[0] is None or
            z[0].shape != inputs[0].shape or
            not z[0].is_c_contiguous()):
        z[0] = theano.sandbox.cuda.CudaNdarray.zeros(inputs[0].shape)
    if inputs[0].shape != inputs[1].shape:
        raise TypeError("PycudaElemwiseSourceModuleOp:"
                        " inputs don't have the same shape!")

    if inputs[0].size > 512:
        grid = (int(numpy.ceil(inputs[0].size / 512.)), 1)
        block = (512, 1, 1)
    else:
        grid = (1, 1)
        block = (inputs[0].shape[0], inputs[0].shape[1], 1)
    self.pycuda_fct(inputs[0], inputs[1], z[0],
                    numpy.intc(inputs[1].size), block=block, grid=grid)
Project: Theano-Deep-learning | Author: GeekLiB

def make_thunk(self, node, storage_map, _, _2):
    mod = SourceModule("""
__global__ void my_fct(float * i0, float * o0, int size) {
    int i = blockIdx.x*blockDim.x + threadIdx.x;
    if(i<size){
        o0[i] = i0[i]*2;
    }
}""")
    pycuda_fct = mod.get_function("my_fct")
    inputs = [storage_map[v] for v in node.inputs]
    outputs = [storage_map[v] for v in node.outputs]

    def thunk():
        z = outputs[0]
        if z[0] is None or z[0].shape != inputs[0][0].shape:
            z[0] = cuda.CudaNdarray.zeros(inputs[0][0].shape)
        grid = (int(numpy.ceil(inputs[0][0].size / 512.)), 1)
        # the output buffer z[0] was missing from the scraped call;
        # the kernel expects (input, output, size)
        pycuda_fct(inputs[0][0], z[0], numpy.intc(inputs[0][0].size),
                   block=(512, 1, 1), grid=grid)

    return thunk
Project: lim | Author: limix

def npy2py_type(npy_type):
    # (the scraped lists were collapsed; the full sets of integer and
    #  float aliases are restored here)
    int_types = [
        np.int_, np.intc, np.intp, np.int8, np.int16, np.int32, np.int64,
        np.uint8, np.uint16, np.uint32, np.uint64
    ]

    float_types = [np.float_, np.float16, np.float32, np.float64]

    bytes_types = [np.str_, np.string_]

    if npy_type in int_types:
        return int
    if npy_type in float_types:
        return float
    if npy_type in bytes_types:
        return bytes

    if hasattr(npy_type, 'char'):
        if npy_type.char in ['S', 'a']:
            return bytes
        raise TypeError

    return npy_type
Project: Parallel-SGD | Author: angadgill

def _validate_X_predict(self, X, check_input):
    # (body truncated in the original scrape; it matches the
    #  scikit-garden _validate_X_predict shown above)
    ...
    return X
Project: Parallel-SGD | Author: angadgill

# (the scrape collapsed the argument lists; restored per the function signature)
def _open_and_load(f, dtype, multilabel, zero_based, query_id):
    if hasattr(f, "read"):
        actual_dtype, data, ind, indptr, labels, query = \
            _load_svmlight_file(f, dtype, multilabel, zero_based, query_id)
    # XXX remove closing when Python 2.7+/3.1+ required
    else:
        with closing(_gen_open(f)) as f:
            actual_dtype, data, ind, indptr, labels, query = \
                _load_svmlight_file(f, dtype, multilabel, zero_based, query_id)

    # convert from array.array, give data the right dtype
    if not multilabel:
        labels = frombuffer_empty(labels, np.float64)
    data = frombuffer_empty(data, actual_dtype)
    indices = frombuffer_empty(ind, np.intc)
    indptr = np.frombuffer(indptr, dtype=np.intc)  # never empty
    query = frombuffer_empty(query, np.intc)

    data = np.asarray(data, dtype=dtype)  # no-op for float{32,64}
    return data, indices, indptr, labels, query
Project: hippylib | Author: hippylib

def to_dense(A):
    """
    Convert a sparse matrix A to dense.
    For debugging only.
    """
    if hasattr(A, "getrow"):
        n = A.size(0)
        m = A.size(1)
        B = np.zeros((n, m), dtype=np.float64)
        for i in range(0, n):
            [j, val] = A.getrow(i)
            B[i, j] = val

        return B
    else:
        x = Vector()
        Ax = Vector()
        A.init_vector(x, 1)
        A.init_vector(Ax, 0)

        n = get_local_size(Ax)
        m = get_local_size(x)
        B = np.zeros((n, m), dtype=np.float64)
        for i in range(0, m):
            i_ind = np.array([i], dtype=np.intc)
            x.set_local(np.ones(i_ind.shape), i_ind)
            A.mult(x, Ax)
            B[:, i] = Ax.get_local()
            x.set_local(np.zeros(i_ind.shape), i_ind)

        return B
Project: slda | Author: Savvysherpa

def _create_lookups(self, X):
    """
    Create document and term lookups for all tokens.
    """
    docs, terms = np.nonzero(X)
    if issparse(X):
        x = np.array(X[docs, terms])[0]
    else:
        x = X[docs, terms]
    doc_lookup = np.ascontiguousarray(np.repeat(docs, x), dtype=np.intc)
    term_lookup = np.ascontiguousarray(np.repeat(terms, x), dtype=np.intc)
    return doc_lookup, term_lookup
Project: slda | Author: Savvysherpa

def _create_edges(self, y, order='tail'):
    y.sort(order=order)
    _docs, _counts = np.unique(y[order], return_counts=True)
    counts = np.zeros(self.n_docs)
    counts[_docs] = _counts
    docs = np.ascontiguousarray(
        np.concatenate(([0], np.cumsum(counts))), dtype=np.intc)
    edges = np.ascontiguousarray(y['index'].flatten(), dtype=np.intc)
    return docs, edges
Project: slda | Author: Savvysherpa

def fit(self, X, y):
    """
    Estimate the topic distributions per document (theta), term
    distributions per topic (phi), and regression coefficients (eta).

    Parameters
    ----------
    X : array-like, shape = (n_docs, n_terms)
        The document-term matrix.

    y : array-like, shape = (n_edges, 3)
        Each entry of y is an ordered triple (d_1, d_2, y_(d_1, d_2)),
        where d_1 and d_2 are documents and y_(d_1, d_2) is an indicator of
        a directed edge from d_1 to d_2.
    """

    self.doc_term_matrix = X
    self.n_docs, self.n_terms = X.shape
    self.n_tokens = X.sum()
    self.n_edges = y.shape[0]
    doc_lookup, term_lookup = self._create_lookups(X)
    # edge info
    y = np.ascontiguousarray(np.column_stack((range(self.n_edges), y)))
    # we use a view here so that we can sort in-place using named columns
    y_rec = y.view(dtype=list(zip(('index', 'tail', 'head', 'data'),
                                  4 * [y.dtype])))
    edge_tail = np.ascontiguousarray(y_rec['tail'].flatten(), dtype=np.intc)
    edge_head = np.ascontiguousarray(y_rec['head'].flatten(), dtype=np.intc)
    edge_data = np.ascontiguousarray(y_rec['data'].flatten(), dtype=np.float64)
    out_docs, out_edges = self._create_edges(y_rec, order='tail')
    in_docs, in_edges = self._create_edges(y_rec, order='head')
    # iterate
    self.theta, self.phi, self.H, self.loglikelihoods = gibbs_sampler_grtm(
        self.n_iter, self.n_report_iter, self.n_topics, self.n_docs,
        self.n_terms, self.n_tokens, self.n_edges, self.alpha, self.beta,
        self.mu, self.nu2, self.b, doc_lookup, term_lookup, out_docs,
        out_edges, in_docs, in_edges, edge_tail, edge_head, edge_data,
        self.seed)
Project: slda | Author: Savvysherpa

# This snippet was heavily truncated by the scrape; the signature and the
# broken final call are repaired minimally, and arguments lost in scraping
# are not guessed at beyond what the surrounding examples imply.
def fit(self, X, y, hier):
    """
    Estimate the topic distributions per document (theta), term
    distributions per topic (phi), and regression coefficients (eta).

    y : array-like, shape = (n_docs, n_labels)
        Response values for each document for each label.

    hier : 1D array-like, size = n_labels
        The index of the list corresponds to the current label
        and the value of the indexed position is the parent of the label.
        Set -1 as the root.
    """

    self.doc_term_matrix = X
    self.n_docs, self.n_terms = X.shape
    self.n_tokens = X.sum()
    doc_lookup, term_lookup = self._create_lookups(X)

    # iterate (some sampler arguments were dropped by the scrape)
    self.theta, self.eta, self.loglikelihoods = gibbs_sampler_blhslda(
        self.n_iter,
        self.n_topics, self.n_terms,
        self.alpha, self.mu,
        doc_lookup, term_lookup,
        np.ascontiguousarray(y, dtype=np.intc),
        np.ascontiguousarray(hier, dtype=np.intc),
        self.seed)
Project: radar | Author: amoose136

def test_dtype(self):
    dt = np.intc
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.array([1], dt)))
    dt = '<i4'
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.array([1], dt)))
    dt = np.dtype('>i4')
    p = ndpointer(dtype=dt)
    p.from_param(np.array([1], dt))
    self.assertRaises(TypeError, p.from_param,
                      np.array([1], dt.newbyteorder('swap')))
    dtnames = ['x', 'y']
    dtformats = [np.intc, np.float64]
    dtdescr = {'names': dtnames, 'formats': dtformats}
    dt = np.dtype(dtdescr)
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.zeros((10,), dt)))
    samedt = np.dtype(dtdescr)
    p = ndpointer(dtype=samedt)
    self.assertTrue(p.from_param(np.zeros((10,), dt)))
    dt2 = np.dtype(dtdescr, align=True)
    if dt.itemsize != dt2.itemsize:
        self.assertRaises(TypeError, p.from_param, np.zeros((10,), dt2))
    else:
        self.assertTrue(p.from_param(np.zeros((10,), dt2)))
Project: skboost | Author: hbldh

def predict(self, X, check_input=True):
    """Predict class or regression value for X.

    For a classification model, the predicted class for each sample in X is
    returned. For a regression model, the predicted value based on X is
    returned.

    Parameters
    ----------
    X : array-like of shape = [n_samples, n_features]
        The input samples.

    Returns
    -------
    y : array of shape = [n_samples] or [n_samples, n_outputs]
        The predicted classes, or the predict values.
    """
    X = check_array(X, accept_sparse="csr")
    if issparse(X) and (X.indices.dtype != np.intc or
                        X.indptr.dtype != np.intc):
        raise ValueError("No support for np.int64 index based "
                         "sparse matrices")

    n_samples, n_features = X.shape

    if self.tree_ is None:
        raise Exception("Tree not initialized. Perform a fit first")

    if self.n_features_ != n_features:
        raise ValueError("Number of features of the model must "
                         " match the input. Model n_features is %s and "
                         " input n_features is %s "
                         % (self.n_features_, n_features))

    return (self.tree_.get('coefficient') *
            (X[:, self.tree_.get('best_dim')] > self.tree_.get('threshold')) +
            self.tree_.get('constant'))
Project: relaax | Author: deeplearninc

def _action(*entries):
    return np.array(entries, dtype=np.intc)
Project: pydpc | Author: cwehmeyer

def __init__(self, points, fraction):
    super(Graph, self).__init__(points, fraction)
    self.order = _np.ascontiguousarray(_np.argsort(self.density).astype(_np.intc)[::-1])
    self.delta, self.neighbour = _core.get_delta_and_neighbour(
        self.order, self.distances, self.max_distance)
Project: pydpc | Author: cwehmeyer

def assign(self, min_density, min_delta, border_only=False):
    self.min_density = min_density
    self.min_delta = min_delta
    self.border_only = border_only
    if self.autoplot:
        self.draw_decision_graph(self.min_density, self.min_delta)
    self._get_cluster_indices()
    self.membership = _core.get_membership(self.clusters, self.order, self.neighbour)
    self.border_density, self.border_member = _core.get_border(
        self.kernel_size, self.density, self.membership, self.nclusters)
    self.halo_idx, self.core_idx = _core.get_halo(
        self.density, self.border_density,
        self.border_member.astype(_np.intc), border_only=border_only)
Project: pydpc | Author: cwehmeyer

def _get_cluster_indices(self):
    self.clusters = _np.intersect1d(
        _np.where(self.density > self.min_density)[0],
        _np.where(self.delta > self.min_delta)[0], assume_unique=True).astype(_np.intc)
    self.nclusters = self.clusters.shape[0]
Project: pydpc | Author: cwehmeyer

def _get_membership(self):
    self.membership = -1 * _np.ones(shape=self.order.shape, dtype=_np.intc)
    for i in range(self.nclusters):  # scraped as `self.ncl`; matches nclusters set above
        self.membership[self.clusters[i]] = i
    for i in range(self.npoints):
        if self.membership[self.order[i]] == -1:
            self.membership[self.order[i]] = self.membership[self.neighbour[self.order[i]]]
Project: krpcScripts | Author: jwvanderbeck

def test_dtype(self):
    dt = np.intc
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.array([1], dt)))  # scraped as `dt2`, which is undefined here
Project: rl_3d | Author: avdmitry

def MapActions(self, action_raw):
    self.action = np.zeros([self.num_actions])

    if (action_raw == 0):
        self.action[self.indices["LOOK_LEFT_RIGHT_PIXELS_PER_FRAME"]] = -25
    elif (action_raw == 1):
        self.action[self.indices["LOOK_LEFT_RIGHT_PIXELS_PER_FRAME"]] = 25

    """if (action_raw==2):
        self.action[self.indices["LOOK_DOWN_UP_PIXELS_PER_FRAME"]] = -25
    elif (action_raw==3):
        self.action[self.indices["LOOK_DOWN_UP_PIXELS_PER_FRAME"]] = 25

    if (action_raw==4):
        self.action[self.indices["STRAFE_LEFT_RIGHT"]] = -1
    elif (action_raw==5):
        self.action[self.indices["STRAFE_LEFT_RIGHT"]] = 1

    if (action_raw==6):
        self.action[self.indices["MOVE_BACK_FORWARD"]] = -1
    el"""
    if (action_raw == 2):  # 7
        self.action[self.indices["MOVE_BACK_FORWARD"]] = 1

    # all binary actions need reset
    """if (action_raw==8):
        self.action[self.indices["FIRE"]] = 0
    elif (action_raw==9):
        self.action[self.indices["FIRE"]] = 1

    if (action_raw==10):
        self.action[self.indices["JUMP"]] = 0
    elif (action_raw==11):
        self.action[self.indices["JUMP"]] = 1

    if (action_raw==12):
        self.action[self.indices["CROUCH"]] = 0
    elif (action_raw==13):
        self.action[self.indices["CROUCH"]] = 1"""

    return np.clip(self.action, self.mins, self.maxs).astype(np.intc)
Project: chainer-deconv | Author: germanRos

def _to_ctypes_array(tup, dtype=numpy.intc):
    return numpy.array(tup, dtype=dtype).ctypes
Project: PyDataLondon29-EmbarrassinglyParallelDAWithAWSLambda | Author: SignalMedia

def test_dtype(self):
    dt = np.intc
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.array([1], dt)))
Project: SVPV | Author: VCCRI

def __init__(self, bins, mapq_thresh=30, clip_thresh=1):
    # set parameters
    self.bins = bins
    self.mapQT = mapq_thresh
    self.clip_thresh = clip_thresh

    # initialise data structures
    self.depth_stats = DepthStats(bins, mapq_thresh=mapq_thresh, dtype=np.intc)
    self.aln_stats = np.zeros((bins.num, len(AlignStats.aln_stats_cols)), dtype=np.intc)
    self.fwd_inserts = np.empty(bins.num, dtype=list)
    self.rvs_inserts = np.empty(bins.num, dtype=list)
    for j in range(0, bins.num):
        self.fwd_inserts[j] = []
        self.rvs_inserts[j] = []
Project: GAPSAFE_SGL | Author: EugeneNdiaye

def generate_data(n_samples, n_features, size_groups, rho=0.5,
                  random_state=24):
    """ Data generation process with Toeplitz-like correlated features:
    this corresponds to the synthetic dataset used in our paper
    "GAP Safe Screening Rules for Sparse-Group Lasso".
    """

    rng = check_random_state(random_state)
    n_groups = len(size_groups)
    # g_start = np.zeros(n_groups, order='F', dtype=np.intc)
    # for i in range(1, n_groups):
    #     g_start[i] = size_groups[i - 1] + g_start[i - 1]
    g_start = np.cumsum(size_groups, dtype=np.intc) - size_groups[0]

    # 10% of groups are active
    gamma1 = int(np.ceil(n_groups * 0.1))
    selected_groups = rng.random_integers(0, n_groups - 1, gamma1)
    true_beta = np.zeros(n_features)

    for i in selected_groups:

        begin = g_start[i]
        end = g_start[i] + size_groups[i]
        # 10% of features are active
        gamma2 = int(np.ceil(size_groups[i] * 0.1))
        selected_features = rng.random_integers(begin, end - 1, gamma2)

        ns = len(selected_features)
        s = 2 * rng.rand(ns) - 1
        u = rng.rand(ns)
        true_beta[selected_features] = np.sign(s) * (10 * u + (1 - u) * 0.5)

    vect = rho ** np.arange(n_features)
    covar = toeplitz(vect, vect)

    X = rng.multivariate_normal(np.zeros(n_features), covar, n_samples)
    # noise term: the scrape dropped the middle argument of (0, 1, n_samples)
    y = np.dot(X, true_beta) + 0.01 * rng.normal(0, 1, n_samples)

    return X, y
Project: aws-lambda-numpy | Author: vitolimandibhrata

def test_dtype(self):
    dt = np.intc
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.array([1], dt)))
Project: qtpandas | Author: draperjames

def expected_support():
    numpy_datatypes = [numpy.bool_, numpy.bool, numpy.int_,
                       numpy.intc, numpy.intp, numpy.int8,
                       numpy.int16, numpy.int32, numpy.int64,
                       numpy.uint8, numpy.uint16, numpy.uint32,
                       numpy.uint64, numpy.float_, numpy.float16,
                       numpy.float32, numpy.float64]

    python_datatypes = [bool, int, float, object]

    return numpy_datatypes + python_datatypes
Project: lambda-numba | Author: rlhotovy

def test_dtype(self):
    dt = np.intc
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.array([1], dt)))
Project: deliver | Author: orchestor

def test_dtype(self):
    dt = np.intc
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.array([1], dt)))
Project: rankpy | Author: dmitru

def predict_rankings(self, queries, compact=False, n_jobs=1):
    '''
    Predict rankings of the documents for the given queries.

    If `compact` is set to True then the output will be one
    long 1d array containing the rankings for all the queries
    instead of a list of 1d arrays.

    The compact array can be subsequently indexed using the query
    index pointer array, see `queries.query_indptr`.

    queries: Queries
        The queries whose documents should be ranked.

    compact: bool
        Specify to return rankings in compact format.

    n_jobs: int, optional (default is 1)
        The number of working threads. If -1, the current number
        of cpus will be used.
    '''
    # Predict the ranking scores for the documents.
    predictions = self.predict(queries, n_jobs)

    rankings = np.zeros(queries.document_count(), dtype=np.intc)

    ranksort_queries(queries.query_indptr, predictions, rankings)

    if compact or len(queries) == 1:
        return rankings
    else:
        return np.array_split(rankings, queries.query_indptr[1:-1])
Project: rankpy | Author: dmitru

# A second, near-identical variant; lines lost in the scrape are
# restored to match the version above.
def predict_rankings(self, queries, compact=False, n_jobs=1):
    '''
    Predict rankings of the documents for the given queries.

    If `compact` is set to True then the output will be one
    long 1d array containing the rankings for all the queries
    instead of a list of 1d arrays.

    The compact array can be subsequently indexed using the query
    index pointer array, see `queries.query_indptr`.

    n_jobs: int, optional (default is 1)
        The number of working threads. If -1, the current number
        of cpus will be used.
    '''
    if self.trained is False:
        raise ValueError('the model has not been trained yet')

    # Predict the ranking scores for the documents.
    predictions = self.predict(queries, n_jobs)

    rankings = np.zeros(queries.document_count(), dtype=np.intc)

    ranksort_queries(queries.query_indptr, predictions, rankings)

    if compact or queries.query_count() == 1:
        return rankings
    else:
        return np.array_split(rankings, queries.query_indptr[1:-1])
Project: rankpy | Author: dmitru

def compute_scale(self, queries, relevance_scores=None):
    '''
    Return the ideal DCG value for each query. Optionally, external
    relevance assessments can be used instead of the relevances
    present in the queries.

    Parameters
    ----------
    queries: Queries
        The queries for which the ideal DCG should be computed.

    relevance_scores: array of integers, optional, (default is None)
        The relevance scores that should be used instead of the
        relevance scores inside queries. Note, this argument is
        experimental.
    '''
    ideal_values = np.empty(queries.query_count(), dtype=np.float64)

    if relevance_scores is not None:
        if queries.document_count() != relevance_scores.shape[0]:
            raise ValueError('number of documents and relevance scores do not match')

        # Need to sort the relevance labels first.
        indices = np.empty(relevance_scores.shape[0], dtype=np.intc)
        # `indices` receives the argsort result (the scrape dropped this argument)
        relevance_argsort_v1(relevance_scores, indices, relevance_scores.shape[0])
        # Creates a copy.
        relevance_scores = relevance_scores[indices]
    else:
        # Assuming these are sorted.
        relevance_scores = queries.relevance_scores

    self.metric_.evaluate_queries_ideal(queries.query_indptr, relevance_scores, ideal_values)

    return ideal_values
Project: rankpy | Author: dmitru

def evaluate(self, ranking=None, labels=None, ranked_labels=None, scales=None):
    '''
    Evaluate NDCG metric on the specified ranked list of document relevance scores.

    The function input can be either a ranked list of relevance labels (`ranked_labels`),
    which is most convenient from the computational point of view, or it can be in
    the form of a ranked list of documents (`ranking`) and corresponding relevance scores
    (`labels`), from which the ranked document relevance labels are computed.

    Parameters:
    -----------
    ranking: array, shape = (n_documents,)
        Specify the list of ranked documents.

    labels: array, shape = (n_documents,)
        Specify the relevance score for each document.

    ranked_labels: array, shape = (n_documents,)
        Relevance scores of the ranked documents. If not given, then
        `ranking` and `labels` must not be None, and `ranked_labels` will
        be inferred from them.

    scales: float, optional (default is None)
        The ideal DCG value on the given documents. If None is given
        it will be computed from the document relevance scores.
    '''
    if ranked_labels is not None:
        return self.get_score_from_labels_list(ranked_labels)
    elif ranking is not None and labels is not None:
        if ranking.shape[0] != labels.shape[0]:
            raise ValueError('number of ranked documents != number of relevance labels (%d, %d)'
                             % (ranking.shape[0], labels.shape[0]))
        ranked_labels = np.array(sorted(labels, key=dict(zip(labels, ranking)).get,
                                        reverse=True), dtype=np.intc)
        return self.get_score_from_labels_list(ranked_labels)
Project: rankpy | Author: dmitru

def _get_partition_indices(start, end, n_jobs):
    '''
    Get boundary indices for ``n_jobs`` number of sub-arrays dividing
    a (contiguous) array of indices starting with ``start`` (inclusive)
    and ending with ``end`` (exclusive) into equal parts.
    '''
    if (end - start) >= n_jobs:
        # `end` was missing from the scraped linspace call
        return np.linspace(start, end, n_jobs + 1).astype(np.intc)
    else:
        return np.arange(end - start + 1, dtype=np.intc)
Project: rankpy | Author: dmitru

def save_as_text(self, filepath, shuffle=False):
    '''
    Save queries into the specified file in svmlight format.

    Parameters:
    -----------
    filepath: string
        The filepath where this object will be saved.

    shuffle: bool
        Specify to shuffle the query document lists prior
        to writing into the file.
    '''
    # Inflate the query_ids array such that each id covers
    # the corresponding feature vectors.
    query_ids = np.fromiter(
        chain(*[[qid] * cnt for qid, cnt in zip(self.query_ids, np.diff(self.query_indptr))]),
        dtype=int)

    relevance_scores = self.relevance_scores
    feature_vectors = self.feature_vectors

    if shuffle:
        shuffle_indices = np.random.permutation(self.document_count())
        reshuffle_indices = np.argsort(query_ids[shuffle_indices])
        document_shuffle_indices = np.arange(self.document_count(),
                                             dtype=np.intc)[shuffle_indices[reshuffle_indices]]
        query_ids = query_ids[document_shuffle_indices]
        relevance_scores = relevance_scores[document_shuffle_indices]
        feature_vectors = feature_vectors[document_shuffle_indices]

    with open(filepath, 'w') as ofile:
        for score, qid, feature_vector in zip(relevance_scores,
                                              query_ids,
                                              feature_vectors):
            ofile.write('%d' % score)
            ofile.write(' qid:%d' % qid)
            for feature in zip(self.feature_indices, feature_vector):
                output = ' %d:%.12f' % feature
                ofile.write(output.rstrip('0').rstrip('.'))
            ofile.write('\n')
Project: episodic_control | Author: miyosuda

def _action(*entries):
    return np.array(entries, dtype=np.intc)
Project: LSDMap | Author: ClementiGroup

def get_idxs_thread(comm, npoints):
    """ Get indices for processor using Scatterv

    Note:
    -----
    Uppercase mpi4py functions require everything to be in C-compatible
    types or they will return garbage!
    """

    size = comm.Get_size()
    rank = comm.Get_rank()

    npoints_thread = np.zeros(size, dtype=np.intc)
    offsets_thread = np.zeros(size, dtype=np.intc)

    for idx in range(size):
        npoints_thread[idx] = npoints // size  # integer division (was `/`, Python 2 style)
        offsets_thread[idx] = sum(npoints_thread[:idx])

    for idx in range(npoints % size):
        npoints_thread[idx] += 1
        offsets_thread[idx + 1:] += 1

    npoints_thread = tuple(npoints_thread)
    offsets_thread = tuple(offsets_thread)

    idxs_thread = np.zeros(npoints_thread[rank], dtype=np.intc)
    idxs = np.arange(npoints, dtype=np.intc)

    comm.Scatterv((idxs, npoints_thread, offsets_thread, MPI.INT), idxs_thread, root=0)
    return idxs_thread, offsets_thread
Project: LSDMap | Author: ClementiGroup

def get_ravel_offsets(npoints_thread, natoms):
    """ Get lengths and offsets for gathering trajectory fragments """
    size = len(npoints_thread)
    ravel_lengths = np.zeros(size, dtype=np.intc)
    ravel_offsets = np.zeros(size, dtype=np.intc)

    for i in range(size):
        ravel_lengths[i] = npoints_thread[i] * 3 * natoms
        ravel_offsets[i] = sum(ravel_lengths[:i])

    ravel_lengths = tuple(ravel_lengths)
    ravel_offsets = tuple(ravel_offsets)

    return ravel_lengths, ravel_offsets
Project: Parallel-SGD | Author: angadgill

def _count_vocab(self, raw_documents, fixed_vocab):
    """Create sparse feature matrix, and vocabulary where fixed_vocab=False
    """
    if fixed_vocab:
        vocabulary = self.vocabulary_
    else:
        # Add a new value when a new vocabulary item is seen
        vocabulary = defaultdict()
        vocabulary.default_factory = vocabulary.__len__

    analyze = self.build_analyzer()
    j_indices = _make_int_array()
    indptr = _make_int_array()
    indptr.append(0)
    for doc in raw_documents:
        for feature in analyze(doc):
            try:
                j_indices.append(vocabulary[feature])
            except KeyError:
                # Ignore out-of-vocabulary items for fixed_vocab=True
                continue
        indptr.append(len(j_indices))

    if not fixed_vocab:
        # disable defaultdict behaviour
        vocabulary = dict(vocabulary)
        if not vocabulary:
            raise ValueError("empty vocabulary; perhaps the documents only"
                             " contain stop words")

    j_indices = frombuffer_empty(j_indices, dtype=np.intc)
    indptr = np.frombuffer(indptr, dtype=np.intc)
    values = np.ones(len(j_indices))

    X = sp.csr_matrix((values, j_indices, indptr),
                      shape=(len(indptr) - 1, len(vocabulary)),
                      dtype=self.dtype)
    X.sum_duplicates()
    return vocabulary, X
Project: Alfred | Author: jkachhadia

def test_dtype(self):
    dt = np.intc
    p = ndpointer(dtype=dt)
    self.assertTrue(p.from_param(np.array([1], dt)))
Project: 2016CCF_BDCI_Sougou | Author: coderSkyChen

def _count_vocab(self, raw_documents, fixed_vocab):
    """Create sparse feature matrix, and vocabulary where fixed_vocab=False
    """
    if fixed_vocab:
        vocabulary = self.vocabulary_
    else:
        # Add a new value when a new vocabulary item is seen
        vocabulary = defaultdict()
        vocabulary.default_factory = vocabulary.__len__

    analyze = self.build_analyzer()
    j_indices = []
    indptr = _make_int_array()
    values = _make_int_array()
    indptr.append(0)
    for doc in raw_documents:
        feature_counter = {}
        for feature in analyze(doc):
            try:
                feature_idx = vocabulary[feature]
                if feature_idx not in feature_counter:
                    feature_counter[feature_idx] = 1
                else:
                    feature_counter[feature_idx] += 1
            except KeyError:
                # Ignore out-of-vocabulary items for fixed_vocab=True
                continue

        j_indices.extend(feature_counter.keys())
        values.extend(feature_counter.values())
        indptr.append(len(j_indices))

    if not fixed_vocab:
        # disable defaultdict behaviour
        vocabulary = dict(vocabulary)
        if not vocabulary:
            raise ValueError("empty vocabulary; perhaps the documents only"
                             " contain stop words")

    j_indices = np.asarray(j_indices, dtype=np.intc)
    values = frombuffer_empty(values, dtype=np.intc)

    # the csr_matrix call was collapsed by the scrape; restored to the
    # standard (data, indices, indptr) form used in the example above
    X = sp.csr_matrix((values, j_indices, indptr),
                      shape=(len(indptr) - 1, len(vocabulary)),
                      dtype=self.dtype)
    X.sort_indices()
    return vocabulary, X
Project: 2016CCF_BDCI_Sougou | Author: coderSkyChen

def _count_vocab_2(self, raw_documents, fixed_vocab):
    """Create sparse feature matrix, and vocabulary where fixed_vocab=False
    """
    if fixed_vocab:
        vocabulary = self.vocabulary_
    else:
        # Add a new value when a new vocabulary item is seen
        vocabulary = defaultdict()
        vocabulary.default_factory = vocabulary.__len__

    analyze = self.build_analyzer()
    j_indices = []
    indptr = _make_int_array()
    # values = _make_int_array()
    values = array.array(str("f"))
    indptr.append(0)
    for doc in raw_documents:
        feature_counter = {}
        for feature in analyze(doc):
            try:
                feature_idx = vocabulary[feature]
                if feature_idx not in feature_counter:
                    feature_counter[feature_idx] = 1
                else:
                    feature_counter[feature_idx] += 1
            except KeyError:
                # Ignore out-of-vocabulary items for fixed_vocab=True
                continue

        j_indices.extend(feature_counter.keys())
        values.extend([i * 1.0 / sum(feature_counter.values()) for i in feature_counter.values()])
        indptr.append(len(j_indices))

    if not fixed_vocab:
        # disable defaultdict behaviour
        vocabulary = dict(vocabulary)
        if not vocabulary:
            raise ValueError("empty vocabulary; perhaps the documents only"
                             " contain stop words")

    # the scraped line read `dtype=np.float32`, but column indices must be
    # integers; the float32 dtype belongs to the `values` buffer
    j_indices = np.asarray(j_indices, dtype=np.intc)
    values = np.frombuffer(values, dtype=np.float32)

    X = sp.csr_matrix((values, j_indices, indptr),
                      shape=(len(indptr) - 1, len(vocabulary)),
                      dtype=self.dtype)
    X.sort_indices()
    return vocabulary, X

Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable

How can the Jupyter error "TypeError: 'numpy.ndarray' object is not callable" when printing (Python 3.8.8) be solved?

"Good evening. When trying to print the following, I ran into a numpy problem in Jupyter and got an error. Note that the Python version is 3.8.8. I tested it first in Spyder, where it runs correctly and gives me the expected results.

Using Spyder:

import numpy as np
for i in range (5):
    n = np.random.rand ()
print (n)
Results
0.6604903457995978
0.8236300859753154
0.16067650689842816
0.6967868357083673
0.4231597934445466

Now with Jupyter:

import numpy as np
for i in range (5):
    n = np.random.rand ()
print (n)
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-78-0c6a801b3ea9> in <module>
      2 for i in range (5):
      3     n = np.random.rand ()
----> 4 print (n)

TypeError: 'numpy.ndarray' object is not callable

I'd appreciate help on how to fix this in Jupyter.

Thank you very much for your time.

Regards, John"

Solution

The original page did not record a working fix. A likely cause, for what it's worth: Jupyter keeps kernel state across cells, so if an earlier cell rebound the name print to a numpy array (for example print = np.random.rand(5)), every later print(n) raises exactly this TypeError. Running del print or restarting the kernel restores the builtin; Spyder executes the script in a fresh process each run, which is why the same code works there.
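To make the failure mode concrete, here is a minimal reproduction (a hypothetical session, not taken from the original question) showing how shadowing print triggers the error and how to undo it:

import numpy as np

print = np.random.rand(5)       # accidentally rebinds the builtin name to an ndarray
try:
    print(1.0)                  # now "calls" the array
except TypeError:
    pass                        # TypeError: 'numpy.ndarray' object is not callable

del print                       # drop the shadowing name; the builtin is visible again
print("print works again")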

numpy.random.random & numpy.ndarray.astype & numpy.arange

Today I came across these lines of code:

xb = np.random.random((nb, d)).astype('float32')  # create a 2-D random matrix (nb rows, d columns)
xb[:, 0] += np.arange(nb) / 1000.                 # add an increasing offset to the first column

Understanding these two lines requires three functions.

1. Generating random numbers

numpy.random.random(size=None)

When size is None, a plain float is returned.

When size is not None, a numpy.ndarray is returned. For example, numpy.random.random((1,2)) returns a numpy array with 1 row and 2 columns.

 

2. Casting every element of a numpy array

numpy.ndarray.astype(dtype)

Returns a numpy.ndarray. For example, numpy.array([1, 2, 2.5]).astype(int) returns the numpy array [1, 2, 2].

 

3. Building an evenly spaced sequence

numpy.arange([start, ]stop, [step, ]dtype=None)

Works much like Python's built-in range() and numpy's numpy.linspace.

Returns a numpy array. For example, numpy.arange(3) returns the numpy array [0, 1, 2].
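Putting the three functions together (nb and d are placeholder sizes chosen for this sketch), the two lines from the top of this section can be checked directly:

import numpy as np

nb, d = 5, 3                                       # placeholder dimensions
xb = np.random.random((nb, d)).astype('float32')   # values in [0.0, 1.0), cast to float32
xb[:, 0] += np.arange(nb) / 1000.                  # adds 0.000, 0.001, ... to column 0

print(xb.dtype)   # float32
print(xb.shape)   # (5, 3)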

numpy.ravel()/numpy.flatten()/numpy.squeeze()

numpy.ravel(a, order='C')

  Return a flattened array.

numpy.chararray.flatten(order='C')

  Return a copy of the array collapsed into one dimension.

numpy.squeeze(a, axis=None)

  Remove single-dimensional entries from the shape of an array.

What they have in common: they reduce a multi-dimensional array to one dimension (squeeze does so only by dropping axes of length 1).

How they differ:

  ravel() returns a view, so changing an element of the result changes the corresponding element of the original array;

  flatten() returns a copy, so changing an element of the result does not affect the original array;

  squeeze() returns a view, and merely removes the dimensions of size 1 from the shape;
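The view-versus-copy behaviour described above can be verified directly with np.shares_memory (a quick check, independent of the longer examples below):

import numpy as np

a = np.arange(12).reshape(3, 4)
print(np.shares_memory(a, a.ravel()))    # True  -> view (for contiguous input)
print(np.shares_memory(a, a.flatten()))  # False -> copy

b = np.arange(12).reshape(1, 3, 4)
print(np.shares_memory(b, b.squeeze()))  # True  -> view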

 

ravel() example:

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("array {} size: {}".format(name, arr.size))
    print("array {} shape: {}".format(name, arr.shape))
    print("array {} ndim: {}".format(name, arr.ndim))
    print("array {} element dtype: {}".format(name, arr.dtype))
    # print("array: {}".format(arr.data))

a = np.floor(10 * np.random.random((3, 4)))
print(a)
log_type('a', a)

a1 = a.ravel()
print("a1: {}".format(a1))
log_type('a1', a1)
a1[2] = 100

print(a)
log_type('a', a)

 

flatten() example:

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("array {} size: {}".format(name, arr.size))
    print("array {} shape: {}".format(name, arr.shape))
    print("array {} ndim: {}".format(name, arr.ndim))
    print("array {} element dtype: {}".format(name, arr.dtype))
    # print("array: {}".format(arr.data))

a = np.floor(10 * np.random.random((3, 4)))
print(a)
log_type('a', a)

a1 = a.flatten()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

 

squeeze() examples:

1. Without single-dimensional entries

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("array {} size: {}".format(name, arr.size))
    print("array {} shape: {}".format(name, arr.shape))
    print("array {} ndim: {}".format(name, arr.ndim))
    print("array {} element dtype: {}".format(name, arr.dtype))
    # print("array: {}".format(arr.data))

a = np.floor(10 * np.random.random((3, 4)))
print(a)
log_type('a', a)

a1 = a.squeeze()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

As the results show, even when there are no single-dimensional entries, the array object squeeze() returns is a view, not a copy.

 

2. With single-dimensional entries

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("array {} size: {}".format(name, arr.size))
    print("array {} shape: {}".format(name, arr.shape))
    print("array {} ndim: {}".format(name, arr.ndim))
    print("array {} element dtype: {}".format(name, arr.dtype))
    # print("array: {}".format(arr.data))

a = np.floor(10 * np.random.random((1, 3, 4)))
print(a)
log_type('a', a)

a1 = a.squeeze()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

 

Numpy: array creation with numpy.array(), numpy.arange(), np.linspace(), and basic array attributes

1. Numpy array creation

Part 1: np.array(), np.arange(), np.zeros(), np.ones()

 

import numpy as np
'''
ndarray arrays in numpy
'''

ary = np.array([1, 2, 3, 4, 5])
print(ary)
ary = ary * 10
print(ary)

'''
creating ndarray objects
'''
# create a 2-D array
# np.array([[], [], ...])
a = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])
print(a)

# np.arange(start, stop, step (default 1))
b = np.arange(1, 10, 1)
print(b)

print("-------------np.zeros(number of elements, dtype='element type')-----")
# create a 1-D array:
c = np.zeros(10)
print(c, '; c.dtype:', c.dtype)

# create a 2-D array:
print(np.zeros((3, 4)))

print("----------np.ones(number of elements, dtype='element type')--------")
# create a 1-D array:
d = np.ones(10, dtype='int64')
print(d, '; d.dtype:', d.dtype)

# create a 3-D array:
print(np.ones((2, 3, 4), dtype=np.int32))
# print the number of dimensions
print(np.ones((2, 3, 4), dtype=np.int32).ndim)  # returns: 3 (dimensions)

 


 

Part 2: np.linspace(start, stop, total number of elements)

 

import numpy as np
a = np.arange(10, 30, 5)

b = np.arange(0, 2, 0.3)

c = np.arange(12).reshape(4, 3)

d = np.random.random((2, 3))  # random numbers in [0.0, 1.0), arranged in 2 rows and 3 columns

print(a)
print(b)
print(c)
print(d)

print("-----------------")
from numpy import pi
print(np.linspace(0, 2 * pi, 100))

print("-------------np.linspace(start, stop, total number of elements)------------------")
print(np.sin(np.linspace(0, 2 * pi, 100)))

 


 

 

 

 

2. Attributes of numpy's ndarray objects:

Array shape: array.shape

Array dimensions: array.ndim

Element dtype: array.dtype

Number of elements: array.size

Array indexing (subscripting): array[0]

 

'''
basic array attributes
'''
import numpy as np

print("--------------------Example 1:------------------------------")
a = np.arange(15).reshape(3, 5)
print(a)
print(a.shape)   # print the array shape
print(len(a))    # print the number of rows
print(a.ndim)    # print the number of dimensions
print(a.dtype)   # print the dtype of the array's elements
# print(a.dtype.name)
print(a.size)    # print the total number of elements


print("-------------------Example 2:---------------------------")
a = np.array([[1, 2, 3], [4, 5, 6]])
print(a)

# test basic array attributes
print('a.shape:', a.shape)
print('a.size:', a.size)
print('len(a):', len(a))
# a.shape = (6, )  # this reshapes the original array into a 1-row, 6-column structure
# print(a, 'a.shape:', a.shape)

# indexing array elements
ary = np.arange(1, 28)
ary.shape = (3, 3, 3)   # make it a 3-D array
print("ary.shape:", ary.shape, "\n", ary)

print("-----------------")
print('ary[0]:', ary[0])
print('ary[0][0]:', ary[0][0])
print('ary[0][0][0]:', ary[0][0][0])
print('ary[0,0,0]:', ary[0, 0, 0])

print("-----------------")


# iterate over the 3-D array: visit every element
for i in range(ary.shape[0]):
    for j in range(ary.shape[1]):
        for k in range(ary.shape[2]):
            print(ary[i, j, k], end=' ')

 


 

That concludes today's share on the Python numpy module's intc() example source code and python numpy interp. Thank you for reading. To learn more about Numpy in Jupyter erroring on print (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable, numpy.random.random & numpy.ndarray.astype & numpy.arange, numpy.ravel()/numpy.flatten()/numpy.squeeze(), or Numpy array creation with numpy.array(), numpy.arange(), np.linspace() and basic array attributes, please search this site.
