
Python numpy module in1d() example source code (the numpy module in Python)



This article discusses example source code for the Python numpy module function in1d(), along with the numpy module in Python generally. It also covers the related topics: Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable; Does Numpy's .in1d method fail to correctly evaluate an array against an array view?; numpy.random.random & numpy.ndarray.astype & numpy.arange; and numpy.ravel()/numpy.flatten()/numpy.squeeze(). We hope it helps.

Contents:

Python numpy module, in1d() example source code
Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable
Does Numpy's .in1d method fail to correctly evaluate an array against an array view?
numpy.random.random & numpy.ndarray.astype & numpy.arange
numpy.ravel()/numpy.flatten()/numpy.squeeze()

Python numpy module, in1d() example source code

We extracted the following 50 code examples from open-source Python projects to illustrate how to use numpy.in1d().
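
Before turning to the project excerpts, here is a minimal sketch of what numpy.in1d() itself does: it tests whether each element of the first array is also present in the second, and returns a boolean mask over the first array.

import numpy as np

a = np.array([1, 2, 3, 4, 5])
b = np.array([2, 4, 6])

print(np.in1d(a, b))                  # [False  True False  True False]
print(a[np.in1d(a, b)])               # [2 4] -- elements of a that appear in b
print(a[np.in1d(a, b, invert=True)])  # [1 3 5] -- elements of a that do not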

Project: NeoAnalysis    Author: neoanalysis    | project source | file source
def take_slice_of_analogsignalarray_by_channelindex(self,
                                                    channel_indexes=None):
    '''
    Return slices of the :class:`AnalogSignalArrays` in the
    :class:`Segment` that correspond to the :attr:`channel_indexes`
    provided.
    '''
    if channel_indexes is None:
        return []

    sliced_sigarrays = []
    for sigarr in self.analogsignals:
        if sigarr.get_channel_index() is not None:
            ind = np.in1d(sigarr.get_channel_index(), channel_indexes)
            sliced_sigarrays.append(sigarr[:, ind])

    return sliced_sigarrays
Project: PleioPred    Author: yiminghu    | project source | file source
def get_1000G_snps(sumstats, out_file):
    sf = np.loadtxt(sumstats, dtype=str, skiprows=1)
    h5f = h5py.File('ref/Misc/1000G_SNP_info.h5', 'r')
    rf = h5f['snp_chr'][:]
    h5f.close()
    ind1 = np.in1d(sf[:, 1], rf[:, 2])
    ind2 = np.in1d(rf[:, 2], sf[:, 1])
    sf1 = sf[ind1]
    rf1 = rf[ind2]
    ### check order ###
    if sum(sf1[:, 1] == rf1[:, 2]) == len(rf1[:, 2]):
        print 'Good!'
    else:
        print 'Shit happens, sorting sf1 to have the same order as rf1'
        O1 = np.argsort(sf1[:, 1])
        O2 = np.argsort(rf1[:, 2])
        O3 = np.argsort(O2)
        sf1 = sf1[O1][O3]
    out = ['hg19chrc snpid a1 a2 bp or p' + '\n']
    for i in range(len(sf1[:, 1])):
        out.append(sf1[:, 0][i] + ' ' + sf1[:, 1][i] + ' ' + sf1[:, 2][i] + ' ' + sf1[:, 3][i] + ' ' + rf1[:, 5][i] + ' ' + sf1[:, 6][i] + '\n')
    ff = open(out_file, "w")
    ff.writelines(out)
    ff.close()
Project: model_sweeper    Author: akimovmike    | project source | file source
def test_multicollinearity(df, target_name, r2_threshold=0.89):
    '''Tests if any of the features could be predicted from others with R2 >= 0.89

    input: dataframe, name of target (to exclude)
    '''
    r2s = pd.DataFrame()
    for feature in df.columns.difference([target_name]):
        model = sk.linear_model.Ridge()
        model.fit(df[df.columns.difference([target_name, feature])], df[feature])

        pos = np.in1d(model.coef_, np.sort(model.coef_)[-5:])

        r2s = r2s.append(pd.DataFrame(
            {'r2': sk.metrics.r2_score(
                 df[feature],
                 model.predict(df[df.columns.difference([target_name, feature])])),
             'predictors': str(df.columns.difference([target_name, feature])[np.ravel(np.argwhere(pos == True))].tolist())},
            index=[feature]))
        print('Testing', feature)

    print('-----------------')

    if len(r2s[r2s['r2'] >= r2_threshold]) > 0:
        print('multicollinearity detected')
        print(r2s[r2s['r2'] >= r2_threshold])
    else:
        print('No multicollinearity')
Project: Crossworder    Author: olety    | project source | file source
def __init__(self, **kwargs):
    logging.info('Crossword __init__: Initializing crossword...')
    logging.debug('kwargs:', kwargs)
    # Reading kwargs
    self.setup = kwargs
    self.rows = int(kwargs.get('n', 5))
    self.cols = int(kwargs.get('m', 5))
    self.words_file = str(kwargs.get('word_file', 'lemma.num.txt'))
    self.sort = bool(kwargs.get('sort', False))
    self.maximize_len = bool(kwargs.get('maximize_len', False))
    self.repeat_words = bool(kwargs.get('repeat_words', False))
    logging.debug('Crossword __init__: n={}, m={}, fname={}'.format(self.rows, self.cols, self.words_file))
    # Loading words
    logging.debug('Crossword __init__: Started loading words from {}'.format(self.words_file))
    arr = np.genfromtxt(self.words_file, dtype='str', delimiter=' ')
    self.words = arr[np.in1d(arr[:, 3], ['v', 'n', 'adv', 'a'])][:, 2].tolist()
    # Number of words loaded
    logging.debug('Crossword __init__: Number of words loaded: {}'.format(len(self.words)))
    self.words = list(set(x for x in self.words if len(x) <= self.rows and len(x) <= self.cols))
    if self.sort:
        self.words = sorted(self.words, key=len, reverse=self.maximize_len)
    # After filter logging
    logging.debug('Crossword __init__: Number of words after filter: {}, maxlen = {}'.format(
        len(self.words), len(max(self.words, key=len))))
Project: seniority_list    Author: rubydatasystems    | project source | file source
def test_df_col_or_idx_equivalence(df1,
                                   df2,
                                   col=None):
    '''check whether two dataframes contain the same elements (but not
    necessarily in the same order) in either the indexes or a selected column

    inputs
        df1, df2
            the dataframes to check
        col
            if not None, test this dataframe column for equivalency, otherwise
            test the dataframe indexes

    Returns True or False
    '''
    if not col:
        result = all(np.in1d(df1.index, df2.index,
                             assume_unique=True,
                             invert=False))
    else:
        result = all(np.in1d(df1[col], df2[col],
                             assume_unique=False,
                             invert=False))

    return result
Project: textar    Author: datosgobar    | project source | file source
def make_classifier(self, name, ids, labels):
    """Train an SVM classifier on the loaded texts.

    Creates a classifier that is stored on the object under the name `name`.

    Args:
        name (str): Name for the classifier.
        ids (list): A list of N ids of texts already stored
            in the TextClassifier.
        labels (list): A list of N labels, one for each text id
            present in ids.
    Note:
        Uses the `Scikit-learn <http://scikit-learn.org/>`_ classifier.
    """
    if not all(np.in1d(ids, self.ids)):
        raise ValueError("Some text ids are not stored.")
    setattr(self, name, SGDClassifier())
    classifier = getattr(self, name)
    indices = np.searchsorted(self.ids, ids)
    classifier.fit(self.tfidf_mat[indices, :], labels)
Project: textar    Author: datosgobar    | project source | file source
def retrain(self, name, ids, labels):
    """Partially retrain an SVM classifier.

    Args:
        name (str): Name of the classifier.
        ids (list): A list of N ids of texts already stored
            in the TextClassifier.
        labels (list): A list of N labels, one for each text id
            present in ids.
    Note:
        Uses the `Scikit-learn <http://scikit-learn.org/>`_ classifier.
    """
    if not all(np.in1d(ids, self.ids)):
        raise ValueError("Some text ids are not stored.")
    try:
        classifier = getattr(self, name)
    except AttributeError:
        raise AttributeError("There is no classifier with that name.")
    indices = np.in1d(self.ids, ids)
    if isinstance(labels, str):
        labels = [labels]
    classifier.partial_fit(self.tfidf_mat[indices, :], labels)
Project: polara    Author: Evfro    | project source | file source
def get_feedback_data(self, on_level=None):
    feedback = self.data.fields.feedback
    eval_data = self.data.test.evalset[feedback].values
    holdout = self.data.holdout_size
    feedback_data = eval_data.reshape(-1, holdout)

    if on_level is not None:
        try:
            iter(on_level)
        except TypeError:
            feedback_data = np.ma.masked_not_equal(feedback_data, on_level)
        else:
            mask_level = np.in1d(feedback_data.ravel(),
                                 on_level,
                                 invert=True).reshape(feedback_data.shape)
            feedback_data = np.ma.masked_where(mask_level, feedback_data)
    return feedback_data
Project: texta    Author: texta-tk    | project source | file source
def _find_optimal_clustering(self, clusterings):

    max_score = float('-inf')
    max_clustering = None

    for clustering in clusterings:
        labeled_vectors = [(node.vector, cluster_idx) for cluster_idx in range(len(clustering)) for node in _get_cluster_nodes(clustering[cluster_idx][1])]
        vectors, labels = [np.array(x) for x in zip(*labeled_vectors)]
        if np.in1d([1], labels)[0]:
            score = silhouette_score(vectors, labels, metric='cosine')
        else:
            continue  # silhouette doesn't work with just one cluster
        if score > max_score:
            max_score = score
            max_clustering = clustering

    return zip(*max_clustering)[1] if max_clustering else zip(*clusterings[0])[1]
Project: sequence-based-recommendations    Author: rdevooght    | project source | file source
def remove_rare_elements(data, min_user_activity, min_item_popularity):
    '''Removes users and items that appear in too few interactions.
    min_user_activity is the minimum number of interactions that a user should have.
    min_item_popularity is the minimum number of interactions that an item should have.
    NB: the constraint on items might not be strictly satisfied because rare users and items are removed in alternance,
    and the last removal of inactive users might create new rare items.
    '''

    print('Remove inactive users and rare items...')

    # Remove inactive users a first time
    user_activity = data.groupby('u').size()
    data = data[np.in1d(data.u, user_activity[user_activity >= min_user_activity].index)]
    # Remove unpopular items
    item_popularity = data.groupby('i').size()
    data = data[np.in1d(data.i, item_popularity[item_popularity >= min_item_popularity].index)]
    # Remove users that might have passed below the activity threshold due to the removal of rare items
    user_activity = data.groupby('u').size()
    data = data[np.in1d(data.u, user_activity[user_activity >= min_user_activity].index)]

    return data
Project: l3    Author: jacobandreas    | project source | file source
def reconstruct_goal(world):
    # pdb.set_trace()
    world = world.copy()
    ## indices for grass and puddle
    background_inds = [obj['index'] for (name, obj) in library.objects.iteritems() if obj['background']]
    ## background mask
    background = np.in1d(world, background_inds)
    background = background.reshape(world.shape)
    ## set background to 0
    world[background] = 0
    ## subtract largest background ind
    ## so indices of objects begin at 1
    world[~background] -= max(background_inds)
    world = np.expand_dims(np.expand_dims(world, 0), 0)
    # pdb.set_trace()
    return world
Project: rTensor    Author: erichson    | project source | file source
def check_multiplication_dims(dims, N, M, vidx=False, without=False):
    dims = array(dims, ndmin=1)
    if len(dims) == 0:
        dims = arange(N)
    if without:
        dims = setdiff1d(range(N), dims)
    if not np.in1d(dims, arange(N)).all():
        raise ValueError('Invalid dimensions')
    P = len(dims)
    sidx = np.argsort(dims)
    sdims = dims[sidx]
    if vidx:
        if M > N:
            raise ValueError('More multiplicants than dimensions')
        if M != N and M != P:
            raise ValueError('Invalid number of multiplicants')
        if P == M:
            vidx = sidx
        else:
            vidx = sdims
        return sdims, vidx
    else:
        return sdims
Project: yt    Author: yt-project    | project source | file source
def particle_mask(self):
    # Dynamically create the masking array for particles, and get
    # the data using standard yt methods.
    if self._particle_mask is not None:
        return self._particle_mask
    # This is from disk.
    pid = self.__getitem__('particle_index')
    # This is from the sphere.
    if self._name == "RockstarHalo":
        ds = self.ds.sphere(self.CoM, self._radjust * self.max_radius)
    elif self._name == "LoadedHalo":
        ds = self.ds.sphere(self.CoM, np.maximum(self._radjust * \
            self.ds.quan(self.max_radius, 'code_length'), \
            self.ds.index.get_smallest_dx()))
    sp_pid = ds['particle_index']
    self._ds_sort = sp_pid.argsort()
    sp_pid = sp_pid[self._ds_sort]
    # This matches them up.
    self._particle_mask = np.in1d(sp_pid, pid)
    return self._particle_mask
Project: skggm    Author: skggm    | project source | file source
def has_approx_support(m, m_hat, prob=0.01):
    """Returns 1 if model selection error is less than or equal to prob rate,
    0 else.

    NOTE: why does np.nonzero/np.flatnonzero create so much problems?
    """
    m_nz = np.flatnonzero(np.triu(m, 1))
    m_hat_nz = np.flatnonzero(np.triu(m_hat, 1))

    upper_diagonal_mask = np.flatnonzero(np.triu(np.ones(m.shape), 1))
    not_m_nz = np.setdiff1d(upper_diagonal_mask, m_nz)

    intersection = np.in1d(m_hat_nz, m_nz)  # true positives
    not_intersection = np.in1d(m_hat_nz, not_m_nz)  # false positives

    true_positive_rate = 0.0
    if len(m_nz):
        true_positive_rate = 1. * np.sum(intersection) / len(m_nz)
    true_negative_rate = 1. - true_positive_rate

    false_positive_rate = 0.0
    if len(not_m_nz):
        false_positive_rate = 1. * np.sum(not_intersection) / len(not_m_nz)

    return int(np.less_equal(true_negative_rate + false_positive_rate, prob))
Project: ottertune    Author: cmu-db    | project source | file source
def get_membership_mask(self, labels, rows_or_columns):
    from .util import array_tostring

    assert rows_or_columns in ['rows', 'columns']
    assert isinstance(labels, np.ndarray)
    assert labels.size > 0

    if rows_or_columns == "rows":
        filter_labels = self.rowlabels
    else:
        filter_labels = self.columnlabels

    labels = array_tostring(labels)
    filter_labels = array_tostring(filter_labels)

    return np.in1d(filter_labels.ravel(),
                   labels).reshape(filter_labels.shape)
Project: score_card_base_python    Author: zzstrwolf    | project source | file source
def discrete(self, x, bin=5):
    #res = np.array([0] * x.shape[-1], dtype=int)
    x_copy = pd.Series.copy(x)
    x_copy = x_copy.astype(str)
    #x_copy = x_copy.astype(np.str_)
    #x_copy = x
    x_gt0 = x[x >= 0]
    #if x.name == 'TD_pltF_CNT_1M':
    #    bin = 5
    #    x_gt0 = x[(x >= 0) & (x <= 24)]

    for i in range(bin):
        point1 = stats.scoreatpercentile(x_gt0, i * (100.0 / bin))
        point2 = stats.scoreatpercentile(x_gt0, (i + 1) * (100.0 / bin))
        x1 = x[(x >= point1) & (x <= point2)]
        mask = np.in1d(x, x1)
        #x_copy[mask] = i + 1
        x_copy[mask] = '%s-%s' % (point1, point2)
        #x_copy[mask] = point1
        #print x_copy[mask]
    return x_copy
Project: score_card_base_python    Author: zzstrwolf    | project source | file source
def grade(self, x, bin=5):
    #res = np.array([0] * x.shape[-1], dtype=int)
    x_copy = np.copy(x)
    #x_copy = x_copy.astype(str)
    #x_copy = x_copy.astype(np.str_)
    #x_copy = x
    x_gt0 = x[x >= 0]

    for i in range(bin):
        point1 = stats.scoreatpercentile(x_gt0, i * (100.0 / bin))
        point2 = stats.scoreatpercentile(x_gt0, (i + 1) * (100.0 / bin))
        x1 = x[(x >= point1) & (x <= point2)]
        mask = np.in1d(x, x1)
        x_copy[mask] = i + 1
        #x_copy[mask] = point1
        #print x_copy[mask]
        print point1, point2
    return x_copy
Project: deepcpg    Author: cangermueller    | project source | file source
def map_values(values, pos, target_pos, dtype=None, nan=dat.CPG_NAN):
    """Maps `values` array at positions `pos` to `target_pos`.

    Inserts `nan` for uncovered positions.
    """
    assert len(values) == len(pos)
    assert np.all(pos == np.sort(pos))
    assert np.all(target_pos == np.sort(target_pos))

    values = values.ravel()
    pos = pos.ravel()
    target_pos = target_pos.ravel()
    idx = np.in1d(pos, target_pos)
    pos = pos[idx]
    values = values[idx]
    if not dtype:
        dtype = values.dtype
    target_values = np.empty(len(target_pos), dtype=dtype)
    target_values.fill(nan)
    idx = np.in1d(target_pos, pos).nonzero()[0]
    assert len(idx) == len(values)
    assert np.all(target_pos[idx] == pos)
    target_values[idx] = values
    return target_values
Project: alphacsc    Author: alphacsc    | project source | file source
def test_learn_codes():
    """Test learning of codes."""
    thresh = 0.25

    X, ds, z = simulate_data(n_trials, n_times, n_times_atom, n_atoms)

    for solver in ('l_bfgs', 'ista', 'fista'):
        z_hat = update_z(X, ds, reg, solver=solver,
                         solver_kwargs=dict(factr=1e11, max_iter=50))

        X_hat = construct_X(z_hat, ds)
        assert_true(np.corrcoef(X.ravel(), X_hat.ravel())[1, 1] > 0.99)
        assert_true(np.max(X - X_hat) < 0.1)

        # Find position of non-zero entries
        idx = np.ravel_multi_index(z[0].nonzero(), z[0].shape)
        loc_x, loc_y = np.where(z_hat[0] > thresh)
        # shift position by half the length of atom
        idx_hat = np.ravel_multi_index((loc_x, loc_y), z_hat[0].shape)
        # make sure that the positions are a subset of the positions
        # in the original z
        mask = np.in1d(idx_hat, idx)
        assert_equal(np.sum(mask), len(mask))
Project: coordinates    Author: markovmodel    | project source | file source
def __init__(self, topology, selstr=None, deg=False, cossin=False, periodic=True):
    indices = indices_phi(topology)

    if not selstr:
        self._phi_inds = indices
    else:
        self._phi_inds = indices[np.in1d(indices[:, 1],
                                         topology.select(selstr), assume_unique=True)]

    indices = indices_psi(topology)
    if not selstr:
        self._psi_inds = indices
    else:
        self._psi_inds = indices[np.in1d(indices[:, 1],
                                         topology.select(selstr), assume_unique=True)]

    # alternate phi, psi pairs (phi_1, psi_1, ..., phi_n, psi_n)
    dih_indexes = np.array(list(phi_psi for phi_psi in
                                zip(self._phi_inds, self._psi_inds))).reshape(-1, 4)

    super(BackboneTorsionFeature, self).__init__(topology, dih_indexes,
                                                 deg=deg, cossin=cossin,
                                                 periodic=periodic)
Project: speech_ml    Author: coopie    | project source | file source
def test_ttv_array_like_data_source(self):
    dummy_data_source = DummyDataSource()
    subject_info_dir = os.path.join('test', 'dummy_data', 'metadata')
    ttv = yaml_to_dict(os.path.join(subject_info_dir, 'dummy_ttv.yaml'))

    array_ds = TtvArrayLikeDataSource(dummy_data_source, ttv)

    self.assertEqual(len(array_ds), 3)

    all_values = np.fromiter((x for x in array_ds[:]), dtype='int16')

    self.assertTrue(
        np.all(
            np.in1d(
                all_values,
                np.array([1, 2, 3])
            )
        )
    )
Project: hax    Author: XENON1T    | project source | file source
def get_data(self, dataset, event_list=None):
    # Load Basics for this dataset and shift it by 1
    data = hax.minitrees.load_single_minitree(dataset, 'Basics')
    df = data.shift(1)

    # Add previous_ prefix to all columns
    df = df.rename(columns=lambda x: 'previous_' + x)

    # Add (unshifted) event number and run number, to support merging
    df['event_number'] = data['event_number']
    df['run_number'] = data['run_number']

    # Support for event list (lame)
    if event_list is not None:
        df = df[np.in1d(df['event_number'].values, event_list)]

    return df
Project: yt_astro_analysis    Author: yt-project    | project source | file source
def particle_mask(self):
    # Dynamically create the masking array for particles, and get
    # the data using standard yt methods.
    if self._particle_mask is not None:
        return self._particle_mask
    # This is from disk.
    pid = self.__getitem__('particle_index')
    # This is from the sphere.
    if self._name == "RockstarHalo":
        ds = self.ds.sphere(self.CoM, self._radjust * self.max_radius)
    elif self._name == "LoadedHalo":
        ds = self.ds.sphere(self.CoM, np.maximum(self._radjust * \
            self.ds.quan(self.max_radius, 'code_length'), \
            self.ds.index.get_smallest_dx()))
    sp_pid = ds['particle_index']
    self._ds_sort = sp_pid.argsort()
    sp_pid = sp_pid[self._ds_sort]
    # This matches them up.
    self._particle_mask = np.in1d(sp_pid, pid)
    return self._particle_mask
Project: Waskom_PNAS_2017    Author: WagnerLabPapers    | project source | file source
def extract_from_volume(vol_data, vox_ijk):
    """Extract data values (broadcasting across time if relevant)."""
    i, j, k = vox_ijk.T
    ii, jj, kk = vol_data.shape[:3]
    fov = (np.in1d(i, np.arange(ii)) &
           np.in1d(j, np.arange(jj)) &
           np.in1d(k, np.arange(kk)))

    if len(vol_data.shape) == 3:
        ntp = 1
    else:
        ntp = vol_data.shape[-1]

    roi_data = np.empty((len(i), ntp))
    roi_data[:] = np.nan
    roi_data[fov] = vol_data[i[fov], j[fov], k[fov]]
    return roi_data
Project: ugali    Author: DarkEnergySurvey    | project source | file source
def clip_catalog(self):
    # ROI-specific catalog
    logger.debug("Clipping full catalog...")
    cut_observable = self.mask.restrictCatalogToObservableSpace(self.catalog_full)

    # All objects within disk ROI
    logger.debug("Creating roi catalog...")
    self.catalog_roi = self.catalog_full.applyCut(cut_observable)
    self.catalog_roi.project(self.roi.projector)
    self.catalog_roi.spatialBin(self.roi)

    # All objects interior to the background annulus
    logger.debug("Creating interior catalog...")
    cut_interior = numpy.in1d(ang2pix(self.config['coords']['nside_pixel'], self.catalog_roi.lon, self.catalog_roi.lat),
                              self.roi.pixels_interior)
    #cut_interior = self.roi.inInterior(self.catalog_roi.lon, self.catalog_roi.lat)
    self.catalog_interior = self.catalog_roi.applyCut(cut_interior)
    self.catalog_interior.project(self.roi.projector)
    self.catalog_interior.spatialBin(self.roi)

    # Set the default catalog
    #logger.info("Using interior ROI for likelihood calculation")
    self.catalog = self.catalog_interior
    #self.pixel_roi_cut = self.roi.pixel_interior_cut
Project: ugali    Author: DarkEnergySurvey    | project source | file source
def inFootprint(self, pixels, nside=None):
    """
    Open each valid filename for the set of pixels and determine the set
    of subpixels with valid data.
    """
    if numpy.isscalar(pixels): pixels = numpy.array([pixels])
    if nside is None: nside = self.nside_likelihood

    inside = numpy.zeros(len(pixels), dtype='bool')
    if not self.nside_catalog:
        catalog_pix = [0]
    else:
        catalog_pix = superpixel(pixels, nside, self.nside_catalog)
        catalog_pix = numpy.intersect1d(catalog_pix, self.catalog_pixels)

    for filenames in self.filenames[catalog_pix]:
        #logger.debug("Loading %s" % filenames['mask_1'])
        subpix_1, val_1 = ugali.utils.skymap.readSparseHealpixMap(filenames['mask_1'], 'MAGLIM', construct_map=False)
        #logger.debug("Loading %s" % filenames['mask_2'])
        subpix_2, val_2 = ugali.utils.skymap.readSparseHealpixMap(filenames['mask_2'], 'MAGLIM', construct_map=False)
        subpix = numpy.intersect1d(subpix_1, subpix_2)
        superpix = numpy.unique(ugali.utils.skymap.superpixel(subpix, self.nside_pixel, nside))
        inside |= numpy.in1d(pixels, superpix)

    return inside
Project: ugali    Author: DarkEnergySurvey    | project source | file source
def index_pixels(lon, lat, pixels, nside):
    """
    Find the index for object among a subset of healpix pixels.
    Set index of objects outside the pixel subset to -1

    # ADW: Not really safe to set index = -1 (accesses last entry);
    # -np.inf would be better, but breaks other code...
    """
    pix = ang2pix(nside, lon, lat)
    # pixels should be pre-sorted, otherwise...???
    index = np.searchsorted(pixels, pix)
    if np.isscalar(index):
        if not np.in1d(pix, pixels).any(): index = -1
    else:
        # Find objects that are outside the roi
        #index[np.take(pixels, index, mode='clip') != pix] = -1
        index[~np.in1d(pix, pixels)] = -1
    return index

############################################################
Project: ugali    Author: DarkEnergySurvey    | project source | file source
def get(self, names=None, burn=None, clip=None):
    if names is None: names = list(self.dtype.names)
    names = np.array(names, ndmin=1)

    missing = names[~np.in1d(names, self.dtype.names)]
    if len(missing):
        msg = "field(s) named %s not found" % (missing)
        raise ValueError(msg)
    #idx = np.where(np.in1d(self.dtype.names, names))[0]
    idx = np.array([self.dtype.names.index(n) for n in names])

    # Remove zero entries
    zsel = ~np.all(self.ndarray == 0, axis=1)
    # Remove burn entries
    bsel = np.zeros(len(self), dtype=bool)
    bsel[slice(burn, None)] = 1

    data = self.ndarray[:, idx][bsel & zsel]
    if clip is not None:
        from astropy.stats import sigma_clip
        mask = sigma_clip(data, sig=clip, copy=False, axis=0).mask
        data = data[np.where(~mask.any(axis=1))]

    return data
Project: ugali    Author: DarkEnergySurvey    | project source | file source
def _setup_subpix(self, nside=2**16):
    """
    Subpixels for random position generation.
    """
    # Only setup once...
    if hasattr(self, 'subpix'): return

    # Simulate over full ROI
    self.roi_radius = self.config['coords']['roi_radius']

    # Setup background spatial stuff
    logger.info("Setup subpixels...")
    self.nside_pixel = self.config['coords']['nside_pixel']
    self.nside_subpixel = self.nside_pixel * 2**4  # Could be config parameter
    epsilon = np.degrees(healpy.max_pixrad(self.nside_pixel))  # Pad roi radius to cover edge healpix
    subpix = ugali.utils.healpix.query_disc(self.nside_subpixel, self.roi.vec, self.roi_radius + epsilon)
    superpix = ugali.utils.healpix.superpixel(subpix, self.nside_subpixel, self.nside_pixel)
    self.subpix = subpix[np.in1d(superpix, self.roi.pixels)]
Project: no_fuss_dml    Author: brotherofken    | project source | file source
def iterate_minibatches(self, batchsize, shuffle=True, train=True):
    indices = []
    if train:
        indices = np.argwhere(np.in1d(data.labels, data.train_classes))
    else:
        indices = np.argwhere(np.logical_not(np.in1d(data.labels, data.train_classes)))

    if shuffle:
        np.random.shuffle(indices)

    for start_idx in range(0, len(self.img_paths) - batchsize + 1, batchsize):
        excerpt = indices[start_idx:start_idx + batchsize]
        images = [self._load_preprocess_img(self.img_paths[int(i)]) for i in excerpt]
        if len(images) == batchsize:
            yield np.concatenate(images), np.array(self.labels[excerpt]).astype(np.int32).T
        else:
            raise StopIteration
Project: vtkInterface    Author: akaszynski    | project source | file source
def GetEdgeMask(self, angle):
    """
    Returns a mask of the points of a surface mesh that have a surface
    angle greater than angle

    Parameters
    ----------
    angle : float
        Angle to consider an edge.

    """
    featureEdges = vtk.vtkFeatureEdges()
    featureEdges.SetInputData(self)
    featureEdges.FeatureEdgesOn()
    featureEdges.BoundaryEdgesOff()
    featureEdges.NonManifoldEdgesOff()
    featureEdges.ManifoldEdgesOff()
    featureEdges.SetFeatureAngle(angle)
    featureEdges.Update()
    edges = featureEdges.GetOutput()
    origID = vtkInterface.GetPointScalars(edges, 'vtkOriginalPointIds')

    return np.in1d(self.GetPointScalars('vtkOriginalPointIds'),
                   origID,
                   assume_unique=True)
Project: sims_featureScheduler    Author: lsst    | project source | file source
def RaDec2region(ra, dec, nside):
    SCP_indx, NES_indx, GP_indx, WFD_indx = mutually_exclusive_regions(nside)

    indices = _raDec2Hpid(nside, np.radians(ra), np.radians(dec))
    result = np.empty(np.size(indices), dtype=object)
    SCP = np.in1d(indices, SCP_indx)
    NES = np.in1d(indices, NES_indx)
    GP = np.in1d(indices, GP_indx)
    WFD = np.in1d(indices, WFD_indx)

    result[SCP] = 'SCP'
    result[NES] = 'NES'
    result[GP] = 'GP'
    result[WFD] = 'WFD'

    return result
Project: loompy    Author: linnarsson-lab    | project source | file source
def __getitem__(self, thing: Any) -> sparse.coo_matrix:
    if type(thing) is slice or type(thing) is np.ndarray or type(thing) is int:
        gm = GraphManager(None, axis=self.axis)
        for key, g in self.items():
            # Slice the graph matrix properly without making it dense
            (a, b, w) = (g.row, g.col, g.data)
            indices = np.arange(g.shape[0])[thing]
            mask = np.logical_and(np.in1d(a, indices), np.in1d(b, indices))
            a = a[mask]
            b = b[mask]
            w = w[mask]
            d = dict(zip(np.sort(indices), np.arange(indices.shape[0])))
            a = np.array([d[x] for x in a])
            b = np.array([d[x] for x in b])
            gm[key] = sparse.coo_matrix((w, (a, b)), shape=(len(indices), len(indices)))
        return gm
    else:
        return self.__getattr__(thing)
Project: edm2016    Author: Knewton    | project source | file source
def get_data_by_id(self, ids):
    """ Helper for getting current data values from stored identifiers
    :param float|list ids: ids for which data are requested
    :return: the stored ids
    :rtype: np.ndarray
    """
    if self.ids is None:
        raise ValueError("IDs not stored in node {}".format(self.name))
    if self.data is None:
        raise ValueError("No data in node {}".format(self.name))
    ids = np.array(ids, ndmin=1, copy=False)
    found_items = np.in1d(ids, self.ids)
    if not np.all(found_items):
        raise ValueError("Cannot find {} among {}".format(ids[np.logical_not(found_items)],
                                                          self.name))
    idx = np.empty(len(ids), dtype='int')
    for k, this_id in enumerate(ids):
        if self.ids.ndim > 1:
            idx[k] = np.flatnonzero(np.all(self.ids == this_id, axis=1))[0]
        else:
            idx[k] = np.flatnonzero(self.ids == this_id)[0]
    return np.array(self.data, ndmin=1)[idx]
Project: edm2016    Author: Knewton    | project source | file source
def split_data(data, num_folds, seed=0):
    """ Split all interactions into K-fold sets of training and test dataframes. Splitting is done
    by assigning student ids to the training or test sets.

    :param pd.DataFrame data: all interactions
    :param int num_folds: number of folds
    :param int seed: seed for the splitting
    :return: a generator over (train dataframe, test dataframe) tuples
    :rtype: generator[(pd.DataFrame, pd.DataFrame)]
    """
    # break up students into folds
    fold_student_idx = _get_fold_student_idx(np.unique(data[USER_IDX_KEY]), num_folds=num_folds,
                                             seed=seed)

    for fold_test_student_idx in fold_student_idx:
        test_idx = np.in1d(data[USER_IDX_KEY], fold_test_student_idx)
        train_idx = np.logical_not(test_idx)
        yield (data[train_idx].copy(), data[test_idx].copy())
Project: low-shot-shrink-hallucinate    Author: facebookresearch    | project source | file source
def eval_loop(data_loader, model, base_classes, novel_classes):
    model = model.eval()
    top1 = None
    top5 = None
    all_labels = None
    for i, (x, y) in enumerate(data_loader):
        x = Variable(x.cuda())
        scores = model(x)
        top1_this, top5_this = perelement_accuracy(scores.data, y)
        top1 = top1_this if top1 is None else np.concatenate((top1, top1_this))
        top5 = top5_this if top5 is None else np.concatenate((top5, top5_this))
        all_labels = y.numpy() if all_labels is None else np.concatenate((all_labels, y.numpy()))

    is_novel = np.in1d(all_labels, novel_classes)
    is_base = np.in1d(all_labels, base_classes)
    is_either = is_novel | is_base
    top1_novel = np.mean(top1[is_novel])
    top1_base = np.mean(top1[is_base])
    top1_all = np.mean(top1[is_either])
    top5_novel = np.mean(top5[is_novel])
    top5_base = np.mean(top5[is_base])
    top5_all = np.mean(top5[is_either])
    return np.array([top1_novel, top5_novel, top1_base, top5_base, top1_all, top5_all])
Project: Parallel-SGD    Author: angadgill    | project source | file source
def _mask_edges_weights(mask, edges, weights=None):
    """Apply a mask to edges (weighted or not)"""
    inds = np.arange(mask.size)
    inds = inds[mask.ravel()]
    ind_mask = np.logical_and(np.in1d(edges[0], inds),
                              np.in1d(edges[1], inds))
    edges = edges[:, ind_mask]
    if weights is not None:
        weights = weights[ind_mask]
    if len(edges.ravel()):
        maxval = edges.max()
    else:
        maxval = 0
    order = np.searchsorted(np.unique(edges.ravel()), np.arange(maxval + 1))
    edges = order[edges]
    if weights is None:
        return edges
    else:
        return edges, weights
Project: segmentator    Author: ofgulban    | project source | file source
def map_2D_hist_to_ima(imaSlc2volHistMap, volHistMask):
    """Volume histogram to image mapping for slices (uses np.in1d).

    Parameters
    ----------
    imaSlc2volHistMap : TODO
    volHistMask : TODO

    Returns
    -------
    imaSlcMask : TODO

    """
    imaSlcMask = np.zeros(imaSlc2volHistMap.flatten().shape)
    idxUnique = np.unique(volHistMask)
    for idx in idxUnique:
        linIndices = np.where(volHistMask.flatten() == idx)[0]
        # return logical array with length equal to nr of voxels
        voxMask = np.in1d(imaSlc2volHistMap.flatten(), linIndices)
        # reset mask and apply logical indexing
        imaSlcMask[voxMask] = idx
    imaSlcMask = imaSlcMask.reshape(imaSlc2volHistMap.shape)
    return imaSlcMask
Project: spatial-reasoning    Author: JannerM    | project source | file source
def reconstruct_goal(world):
    # pdb.set_trace()
    world = world.copy()
    ## indices for grass and puddle
    background_inds = [obj['index'] for (name, obj) in library.objects.iteritems() if obj['background']]
    ## background mask
    background = np.in1d(world, background_inds)
    background = background.reshape(world.shape)
    ## set background to 0
    world[background] = 0
    ## subtract largest background ind
    ## so indices of objects begin at 1
    world[~background] -= max(background_inds)
    world = np.expand_dims(np.expand_dims(world, 0), 0)
    # pdb.set_trace()
    return world
Project: orange3-geo    Author: biolab    | project source | file source
def detect_input(cls, values, sample_size=200):
    """
    Return first "from_" method that in more than 50% matches values,
    or None.
    """
    assert isinstance(values, pd.Series)
    values = values.drop_duplicates().dropna()
    if len(values) > sample_size:
        values = values.sample(sample_size)
    strlen = values.str.len().dropna().unique()
    for method, *cond in ((cls.from_cc2, len(strlen) == 1 and strlen[0] == 2),
                          (cls.from_cc3, len(strlen) == 1 and strlen[0] == 3),
                          (cls.from_cc_name,),
                          (cls.from_us_state,),
                          (cls.from_city_eu,),
                          (cls.from_city_us,),
                          (cls.from_city_world,),
                          (cls.from_region,),
                          (cls.from_fips,),
                          (cls.from_hasc, np.in1d(strlen, [2, 5, 8]).all())):
        if cond and not cond[0]:
            continue
        if sum(map(bool, method(values))) >= len(values) / 2:
            return method
    return None
Project: xarray-simlab    Author: benbovy    | project source | file source
def init_snapshots(self):
    """Initialize snapshots for model variables given in attributes of
    Dataset.
    """
    self.snapshot_vars = self.dataset.xsimlab.snapshot_vars

    self.snapshot_values = {}
    for vars in self.snapshot_vars.values():
        self.snapshot_values.update({v: [] for v in vars})

    self.snapshot_save = {
        clock: np.in1d(self.dataset[self.master_clock_dim].values,
                       self.dataset[clock].values)
        for clock in self.snapshot_vars if clock is not None
    }
Project: SNPmatch    Author: Gregor-Mendel-Institute    | project source | file source
def crossGenotypeWindows(commonSNPsCHR, commonSNPsPOS, snpsP1, snpsP2, inFile, binLen, outFile, logDebug=True):
    ## inFile are the SNPs of the sample
    (snpCHR, snpPOS, snpGT, snpWEI, DPmean) = snpmatch.parseInput(inFile=inFile, logDebug=logDebug)
    # identifying the segregating SNPs between the accessions
    # only selecting 0 or 1
    segSNPsind = np.where((snpsP1 != snpsP2) & (snpsP1 >= 0) & (snpsP2 >= 0) & (snpsP1 < 2) & (snpsP2 < 2))[0]
    log.info("number of segregating snps between parents: %s", len(segSNPsind))
    (ChrBins, PosBins) = getBinsSNPs(commonSNPsCHR, binLen)
    log.info("number of bins: %s", len(ChrBins))
    outfile = open(outFile, 'w')
    for i in range(len(PosBins)):
        start = np.sum(PosBins[0:i])
        end = start + PosBins[i]
        # first snp positions which are segregating and are in this window
        reqPOSind = segSNPsind[np.where((segSNPsind < end) & (segSNPsind >= start))[0]]
        reqPOS = commonSNPsPOS[reqPOSind]
        perchrTarPosind = np.where(snpCHR == ChrBins[i])[0]
        perchrTarPos = snpPOS[perchrTarPosind]
        matchedAccInd = reqPOSind[np.where(np.in1d(reqPOS, perchrTarPos))[0]]
        matchedTarInd = perchrTarPosind[np.where(np.in1d(perchrTarPos, reqPOS))[0]]
        matchedTarGTs = snpGT[matchedTarInd]
        try:
            TarGTBinary = snpmatch.parseGT(matchedTarGTs)
            TarGTBinary[np.where(TarGTBinary == 2)[0]] = 4
            genP1 = np.subtract(TarGTBinary, snpsP1[matchedAccInd])
            genP1no = len(np.where(genP1 == 0)[0])
            (geno, pval) = getWindowGenotype(genP1no, len(genP1))
            outfile.write("%s\t%s\t%s\t%s\t%s\n" % (i+1, genP1no, len(genP1), geno, pval))
        except:
            outfile.write("%s\tNA\tNA\tNA\tNA\n" % (i+1))
        if i % 40 == 0:
            log.info("progress: %s windows", i+10)
    log.info("done!")
    outfile.close()
Project: sourcetracker2    Author: biota    | project source | file source
def intersect_and_sort_samples(sample_metadata, feature_table):
    '''Return input tables retaining only shared samples, row order equivalent.

    Parameters
    ----------
    sample_metadata : pd.DataFrame
        Contingency table with rows, columns = samples, metadata.
    feature_table : pd.DataFrame
        Contingency table with rows, columns = samples, features.

    Returns
    -------
    sample_metadata, feature_table : pd.DataFrame, pd.DataFrame
        Input tables with unshared samples removed and ordered equivalently.

    Raises
    ------
    ValueError
        If no shared samples are found.
    '''
    shared_samples = np.intersect1d(sample_metadata.index, feature_table.index)
    if shared_samples.size == 0:
        raise ValueError('There are no shared samples between the feature '
                         'table and the sample metadata. Ensure that you have '
                         'passed the correct files.')
    elif (shared_samples.size == sample_metadata.shape[0] ==
          feature_table.shape[0]):
        s_metadata = sample_metadata.copy()
        s_features = feature_table.copy()
    else:
        s_metadata = sample_metadata.loc[np.in1d(sample_metadata.index,
                                                 shared_samples), :].copy()
        s_features = feature_table.loc[np.in1d(feature_table.index,
                                               shared_samples), :].copy()
    return s_metadata, s_features.loc[s_metadata.index, :]
Project: ga-reader    Author: bdhingra    | project source | file source
def prepare_input(d, q):
    f = np.zeros(d.shape[:2]).astype('int32')
    for i in range(d.shape[0]):
        f[i, :] = np.in1d(d[i, :, 0], q[i, :, 0])
    return f
Project: Modeling-Cloth    Author: the3dadvantage    | project source | file source
def get_piece_bool(num, dict):
    '''Uses a vertex number to find the right bool array
    as created by divide_garment()'''
    count = 0
    nums = dict['garment_pieces']['numbers_array']
    for i in nums:
        if np.in1d(num, i):
            return count
        count += 1
Project: Modeling-Cloth    Author: the3dadvantage    | project source | file source
def find_linked(ob, vert, per_face='empty'):
    '''Takes a vert and returns an array of linked face indices'''
    the_coffee_is_hot = True
    fidx = np.arange(len(ob.data.polygons))
    eidx = np.arange(len(ob.data.edges))
    f_set = np.array([])
    e_set = np.array([])
    verts = ob.data.vertices
    verts[vert].select = True
    v_p_f_count = [len(p.vertices) for p in ob.data.polygons]
    max_count = np.max(v_p_f_count)
    if per_face == 'empty':
        per_face = [[i for i in poly.vertices] for poly in ob.data.polygons]
    for i in per_face:
        for j in range(max_count - len(i)):
            i.append(i[0])
    verts_per_face = np.array(per_face)
    vert = np.array([vert])

    while the_coffee_is_hot:
        booly = np.any(np.in1d(verts_per_face, vert).reshape(verts_per_face.shape), axis=1)
        f_set = np.append(f_set, fidx[booly])
        new_verts = verts_per_face[booly].ravel()
        if len(new_verts) == 0:
            return np.array(f_set, dtype=np.int64)

        cull = np.in1d(new_verts, vert)
        vert = new_verts[-cull]
        verts_per_face = verts_per_face[-booly]
        fidx = fidx[-booly]
Project: Modeling-Cloth    Author: the3dadvantage    | project source | file source
def divide_garment(ob, dict):
    '''Creates a set of bool arrays and a set of number arrays
    for indexing a sub set of the uv coords. The number arrays can
    be used to look up which bool array to use based on a vertex number'''
    if ob == 'empty':
        ob = bpy.context.object
    #-----------------------------------
    v_count = len(ob.data.vertices)
    idx = np.arange(v_count)
    full_set = np.array([])
    dict['islands'] = []
    v_list = [[i for i in poly.vertices] for poly in ob.data.polygons]
    v_in_faces = np.hstack(v_list)
    dict['v_in_faces'] = v_in_faces
    remaining = [1]
    vert = 0
    while len(remaining) > 0:
        linked = find_linked(ob, vert, v_list)
        selected = np.unique(np.hstack(np.array(v_list)[linked]).ravel())
        dict['islands'].append(selected)
        full_set = np.append(full_set, selected)
        remain_bool = np.in1d(idx, full_set, invert=True)
        remaining = idx[remain_bool]
        if len(remaining) == 0:
            break
        vert = remaining[0]
    #################################
Project: radar    Author: amoose136    | project source | file source
def setdiff1d(ar1, ar2, assume_unique=False):
    """
    Find the set difference of two arrays.

    Return the sorted, unique values in `ar1` that are not in `ar2`.

    Parameters
    ----------
    ar1 : array_like
        Input array.
    ar2 : array_like
        Input comparison array.
    assume_unique : bool
        If True, the input arrays are both assumed to be unique, which
        can speed up the calculation. Default is False.

    Returns
    -------
    setdiff1d : ndarray
        Sorted 1D array of values in `ar1` that are not in `ar2`.

    See Also
    --------
    numpy.lib.arraysetops : Module with a number of other functions for
                            performing set operations on arrays.

    Examples
    --------
    >>> a = np.array([1, 2, 3, 2, 4, 1])
    >>> b = np.array([3, 4, 5, 6])
    >>> np.setdiff1d(a, b)
    array([1, 2])

    """
    if assume_unique:
        ar1 = np.asarray(ar1).ravel()
    else:
        ar1 = unique(ar1)
        ar2 = unique(ar2)
    return ar1[in1d(ar1, ar2, assume_unique=True, invert=True)]
Project: seniority_list    Author: rubydatasystems    | project source | file source
def set_snapshot_weights(ratio_dict,
                         orig_rng,
                         eg_range):
    '''Determine the job distribution ratios to carry forward during
    the ratio condition application period using actual jobs held ratios.
    Likely called at implementation month by main job assignment function.
    Count the number of jobs held by each of the ratio groups for each of the
    affected job level numbers. Set the weightings in the distribute function
    accordingly.

    inputs
        ratio_dict (dictionary)
            dictionary containing job levels as keys and ratio groups,
            weightings, month_start and month_end as values.
        orig_rng (numpy array)
            month slice of original job array
        eg_range (numpy array)
            month slice of employee group code array
    '''
    ratio_dict = copy.deepcopy(ratio_dict)
    job_nums = list(ratio_dict.keys())
    for job in job_nums:
        wgt_list = []
        for ratio_group in ratio_dict[job][0]:
            wgt_list.append(np.count_nonzero((orig_rng == job) &
                                             (np.in1d(eg_range, ratio_group))))
        ratio_dict[job][1] = tuple(wgt_list)

    return ratio_dict


# ASSIGN JOBS BY RATIO CONDITION

Numpy in Jupyter errors when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable

How do you solve this error: Numpy in Jupyter fails when printing (Python 3.8.8) with TypeError: 'numpy.ndarray' object is not callable?

Good evening. While trying to print the following, I ran into a numpy problem in Jupyter and got an error. Note that the Python version is 3.8.8. I first tested it with Spyder, where it runs correctly and gives the expected results.

Using Spyder:

import numpy as np
for i in range(5):
    n = np.random.rand()
    print(n)

Results:
0.6604903457995978
0.8236300859753154
0.16067650689842816
0.6967868357083673
0.4231597934445466

Now in Jupyter:

import numpy as np
for i in range(5):
    n = np.random.rand()
    print(n)

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-78-0c6a801b3ea9> in <module>
      2 for i in range(5):
      3     n = np.random.rand()
----> 4     print(n)

TypeError: 'numpy.ndarray' object is not callable

I would appreciate any help on how to fix this in Jupyter.

Thank you very much for your time.

Regards, John

Solution

No effective solution had been found at the time of the original post.
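
A likely cause, though not confirmed in the original thread: a Jupyter kernel keeps the state of every executed cell, so if an earlier cell rebound the name print (or n) to a NumPy array, the later call print(n) tries to call that array and raises exactly this TypeError. A minimal sketch of the diagnosis and fix, assuming that cause:

import numpy as np

print = np.random.rand(5)  # an earlier cell like this shadows the builtin print
# print(0.5)               # would now raise: 'numpy.ndarray' object is not callable

del print                  # remove the shadowing name (or simply restart the kernel)
for i in range(5):
    n = np.random.rand()
    print(n)               # works again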


Does Numpy's .in1d method fail to correctly evaluate an array against an array view?

How do you solve this: does Numpy's .in1d method fail to correctly evaluate an array against an array view?

I am trying to search and see whether one numpy array is contained in another, for debugging purposes.

# Pattern
arr1 = np.array([1.62434536, -0.61175641, -0.52817175])
# type : np.ndarray
# dtype : 'float64'
# shape : (3,)

Then I have a list of tuples, where the first element of each tuple is an n × m ndarray. Suppose this object is called "my_nest".

arr2 = my_nest[0][0][0][0:3]
arr2
# array([ 1.62434536, -0.61175641, -0.52817175])

But the in1d method returns an unintuitive result:

np.in1d(arr1, arr2)
# array([False, False, False], dtype=bool)

I know that slicing an ndarray creates a view of the object in memory, but I even tried wrapping np.copy around it to create a new object in memory before comparing, and I still get False.

Does anyone know what is going on here?

Solution

As mentioned in the comments, this is a floating-point precision effect. Following in1d's source for small arrays, you can re-implement it using isclose instead of ==:

import numpy as np
arr1 = np.array([1.62434536, -0.61175641, -0.52817175])
arr2 = np.array([1.62434536, -0.61175641, -0.52817175 + 1e-12])
print(arr1)
print(arr2)
print('isin:    ', np.in1d(arr1, arr2))
mask = np.zeros(len(arr1), dtype=bool)
for a in arr2:
    mask |= np.isclose(arr1, a)
print('isclose:', mask)

Output:

[ 1.62434536 -0.61175641 -0.52817175]
[ 1.62434536 -0.61175641 -0.52817175]
isin:     [ True  True False]
isclose: [ True  True  True]
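
The same isclose-based check can also be written without the loop, as a sketch that broadcasts arr1 against arr2:

# Compare every element of arr1 with every element of arr2 under a tolerance,
# then collapse over the arr2 axis: True where arr1[i] is close to any arr2[j].
mask = np.isclose(arr1[:, None], arr2).any(axis=1)
print(mask)  # [ True  True  True]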

numpy.random.random & numpy.ndarray.astype & numpy.arange

Today I came across code like this:

xb = np.random.random((nb, d)).astype('float32')  # create a 2-D random matrix (nb rows, d columns)
xb[:, 0] += np.arange(nb) / 1000.                 # add a value to every entry in the first column

Understanding these two lines requires understanding three functions.

1. Generating random numbers

numpy.random.random(size=None)

When size is None, a float is returned.

When size is not None, a numpy.ndarray is returned. For example, numpy.random.random((1, 2)) returns a numpy array with 1 row and 2 columns.

 

2. Casting every element of a numpy array to another type

numpy.ndarray.astype(dtype)

Returns a numpy.ndarray. For example, numpy.array([1, 2, 2.5]).astype(int) returns the numpy array [1, 2, 2].

 

3. Generating an arithmetic sequence

numpy.arange([start, ]stop, [step, ]dtype=None)

Works like Python's built-in range() and numpy's numpy.linspace.

Returns a numpy array. For example, numpy.arange(3) returns the numpy array [0, 1, 2].
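
A short runnable sketch tying the three functions together; nb and d are illustrative values here, since the original snippet does not define them:

import numpy as np

nb, d = 5, 3                                      # illustrative sizes, not from the original
xb = np.random.random((nb, d)).astype('float32')  # nb x d random floats in [0, 1), cast to float32
xb[:, 0] += np.arange(nb) / 1000.                 # adds 0.000, 0.001, 0.002, 0.003, 0.004 to column 0
print(xb.dtype)               # float32
print(np.arange(nb) / 1000.)  # [0.    0.001 0.002 0.003 0.004]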

numpy.ravel()/numpy.flatten()/numpy.squeeze()

numpy.ravel(a, order='C')

  Return a flattened array

numpy.chararray.flatten(order='C')

  Return a copy of the array collapsed into one dimension

numpy.squeeze(a, axis=None)

  Remove single-dimensional entries from the shape of an array.

 

What they share: all three reduce a multi-dimensional array to a one-dimensional array.

How they differ (a short demonstration follows below):

  ravel() returns a view (where possible), so changing an element affects the original array;

  flatten() returns a copy, so changing an element does not affect the original array;

  squeeze() returns a view, and only removes dimensions of size 1 from the shape;
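
A compact check of the view-versus-copy behavior just described (a sketch; note that ravel() falls back to returning a copy when the memory layout makes a view impossible):

import numpy as np

a = np.arange(6).reshape(2, 3)
r = a.ravel()    # view: shares memory with a
f = a.flatten()  # copy: independent memory
r[0] = 100
f[1] = 200
print(a[0, 0])   # 100 -- changed through the ravel() view
print(a[0, 1])   # 1   -- unaffected by the flatten() copy

s = np.ones((1, 3)).squeeze()  # removes the size-1 axis
print(s.shape)                 # (3,)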

 

ravel() example:

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("array {} size: {}".format(name, arr.size))
    print("array {} shape: {}".format(name, arr.shape))
    print("array {} ndim: {}".format(name, arr.ndim))
    print("array {} element dtype: {}".format(name, arr.dtype))
    #print("array: {}".format(arr.data))

a = np.floor(10 * np.random.random((3, 4)))
print(a)
log_type('a', a)

a1 = a.ravel()
print("a1: {}".format(a1))
log_type('a1', a1)
a1[2] = 100

print(a)
log_type('a', a)

 

flatten() example:

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("array {} size: {}".format(name, arr.size))
    print("array {} shape: {}".format(name, arr.shape))
    print("array {} ndim: {}".format(name, arr.ndim))
    print("array {} element dtype: {}".format(name, arr.dtype))
    #print("array: {}".format(arr.data))

a = np.floor(10 * np.random.random((3, 4)))
print(a)
log_type('a', a)

a1 = a.flatten()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

 

squeeze() example:

1. Without single-dimensional entries

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("array {} size: {}".format(name, arr.size))
    print("array {} shape: {}".format(name, arr.shape))
    print("array {} ndim: {}".format(name, arr.ndim))
    print("array {} element dtype: {}".format(name, arr.dtype))
    #print("array: {}".format(arr.data))

a = np.floor(10 * np.random.random((3, 4)))
print(a)
log_type('a', a)

a1 = a.squeeze()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

As the results show, when there are no single-dimensional entries, the array object returned by squeeze() is a view, not a copy.

 

2. With single-dimensional entries

import matplotlib.pyplot as plt
import numpy as np

def log_type(name, arr):
    print("array {} size: {}".format(name, arr.size))
    print("array {} shape: {}".format(name, arr.shape))
    print("array {} ndim: {}".format(name, arr.ndim))
    print("array {} element dtype: {}".format(name, arr.dtype))
    #print("array: {}".format(arr.data))

a = np.floor(10 * np.random.random((1, 3, 4)))
print(a)
log_type('a', a)

a1 = a.squeeze()
print("a1 before modification: {}".format(a1))
log_type('a1', a1)
a1[2] = 100
print("a1 after modification: {}".format(a1))

print("a: {}".format(a))
log_type('a', a)

 

This concludes today's discussion of the Python numpy module in1d() example source code and the numpy module in Python. Thank you for reading. To learn more about Numpy in Jupyter erroring when printing (Python 3.8.8): TypeError: 'numpy.ndarray' object is not callable, Numpy's .in1d method and array views, numpy.random.random & numpy.ndarray.astype & numpy.arange, or numpy.ravel()/numpy.flatten()/numpy.squeeze(), please search this site.
