If you are interested in the MongoDB multi-tenancy game with @Document, this article is for you. We walk through a multi-tenant setup built on the @Document annotation in detail, and along the way cover MongoDB and SpEL expressions in the @Document annotation, MongoDB aggregation in Amazon DocumentDB (inserting a document, the query, and its output), whether AWS DocumentDB validates MongoDB client certificates for mutual TLS, and example source code for com.mongodb.client.model.ReturnDocument.
Contents:

- The MongoDB multi-tenancy game with @Document
- MongoDB and SpEL expressions in the @Document annotation
- MongoDB aggregation in Amazon DocumentDB: insert a document, query, and output
- Does AWS DocumentDB validate MongoDB client certificates for mutual TLS?
- Example source code for com.mongodb.client.model.ReturnDocument
The MongoDB multi-tenancy game with @Document

This is related to "MongoDB and SpEL expressions in the @Document annotation" below.

This is how I create my Mongo template:
@Bean
public MongoDbFactory mongoDbFactory() throws UnknownHostException {
    String dbname = getCustid();
    return new SimpleMongoDbFactory(new MongoClient("localhost"), "mydb");
}

@Bean
MongoTemplate mongoTemplate() throws UnknownHostException {
    MappingMongoConverter converter =
            new MappingMongoConverter(mongoDbFactory(), new MongoMappingContext());
    return new MongoTemplate(mongoDbFactory(), converter);
}
And I have a tenant provider class:

@Component("tenantProvider")
public class TenantProvider {
    public String getTenantId() {
        // custom ThreadLocal logic for getting a name
    }
}

And my domain class:
@Document(collection = "#{@tenantProvider.getTenantId()}_device") public class Device { -- my fields here }
As you can see, I have created the MongoTemplate as specified in that post, but I still get the following error:

Exception in thread "main" org.springframework.expression.spel.SpelEvaluationException: EL1057E:(pos 1): No bean resolver registered in the context to resolve access to bean 'tenantProvider'

What am I doing wrong?
Answer 1

I finally figured out why I was running into this problem.

When initializing with Servlet 3, make sure you add the application context to the Mongo mapping context, as shown below:
@Autowired
private ApplicationContext appContext;

public MongoDbFactory mongoDbFactory() throws UnknownHostException {
    return new SimpleMongoDbFactory(new MongoClient("localhost"), "apollo-mongodb");
}

@Bean
MongoTemplate mongoTemplate() throws UnknownHostException {
    final MongoDbFactory factory = mongoDbFactory();

    final MongoMappingContext mongoMappingContext = new MongoMappingContext();
    mongoMappingContext.setApplicationContext(appContext);

    // Learned from the web: a null type mapper prevents Spring from including
    // the _class attribute in stored documents
    final MappingMongoConverter converter = new MappingMongoConverter(factory, mongoMappingContext);
    converter.setTypeMapper(new DefaultMongoTypeMapper(null));

    return new MongoTemplate(factory, converter);
}
Note the autowiring of the application context and the call to mongoMappingContext.setApplicationContext(appContext);

With those two lines I was able to wire the components correctly and use them in multi-tenant mode.
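For completeness, here is a minimal sketch of what the ThreadLocal logic elided in the question's TenantProvider might look like. This is an assumption for illustration, not the original author's code; the setTenantId/clear entry points (typically driven by a servlet filter) and the "default" fallback are hypothetical:

@Component("tenantProvider")
public class TenantProvider {

    // Holds the tenant id for the current request thread
    private static final ThreadLocal<String> TENANT = new ThreadLocal<>();

    // Typically called from a servlet filter once the tenant is known,
    // e.g. from a request header or the authenticated user (hypothetical entry point)
    public static void setTenantId(String tenantId) {
        TENANT.set(tenantId);
    }

    // Must be called when the request completes, to avoid leaking state
    // across pooled threads
    public static void clear() {
        TENANT.remove();
    }

    public String getTenantId() {
        String tenantId = TENANT.get();
        return tenantId != null ? tenantId : "default"; // fallback is an assumption
    }
}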
MongoDB and SpEL expressions in the @Document annotation

I am trying to use SpEL to route the same document into different collections, based on some rules I define.

So, starting from what I have:

- First, the document:
@Document(collection = "#{@mySpecialProvider.getTargetCollectionName()}")
public class MongoDocument {
// some random fields go in
}
- Second, the provider bean that is supposed to supply the collection name:
@Component("mySpecialProvider")
public class MySpecialProvider {
public String getTargetCollectionName() {
// Thread local magic goes in bellow
String targetCollectionName = (String) RequestLocalContext.getFromLocalContext("targetCollectionName");
if (targetCollectionName == null) {
targetCollectionName = "defaultCollection";
}
return targetCollectionName;
}
}
The problem is that when I try to insert the document into the specific collection the provider should generate, I get the following stack trace:

org.springframework.expression.spel.SpelEvaluationException: EL1057E:(pos 1): No bean resolver registered in the context to resolve access to bean 'mySpecialProvider'

I also tried making the Spring component ApplicationContextAware, but still no luck.
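No answer is included for this question in the original post, but the error is the same one fixed in the previous question: the MongoMappingContext never receives the ApplicationContext, so SpEL has no bean resolver when the @Document expression is evaluated. A minimal sketch of that wiring, assuming the same Java-config style used above (the config class and database name are illustrative):

@Configuration
public class MongoConfig {

    @Autowired
    private ApplicationContext appContext;

    @Bean
    public MongoDbFactory mongoDbFactory() throws UnknownHostException {
        return new SimpleMongoDbFactory(new MongoClient("localhost"), "mydb");
    }

    @Bean
    public MongoTemplate mongoTemplate() throws UnknownHostException {
        MongoMappingContext mappingContext = new MongoMappingContext();
        // The critical line: registers a bean resolver so that
        // #{@mySpecialProvider.getTargetCollectionName()} can be evaluated
        mappingContext.setApplicationContext(appContext);

        MappingMongoConverter converter = new MappingMongoConverter(mongoDbFactory(), mappingContext);
        return new MongoTemplate(mongoDbFactory(), converter);
    }
}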
MongoDB aggregation in Amazon DocumentDB: insert a document, query, and output

I am still looking into why the original query does not work against Amazon DocumentDB. In the meantime, here is a rewritten query that runs on Amazon DocumentDB and should do what the original query was trying to do.

Insert the document:
db.coll.insert({
    "item": "journal",
    "instock": [
        { "warehouse": "A", "qty": [2, 4, 5] },
        { "warehouse": "C", "qty": [8, 5, 2] },
        { "warehouse": "F", "qty": [3] },
        { "warehouse": "K", "qty": [8] },
        { "warehouse": "P", "qty": [3, 7, 9] }
    ]
});
Query:
db.coll.aggregate([
    { $match: { "item": "journal" } },
    { $unwind: "$instock" },
    { $match: { "instock.qty": 5 } },
    {
        "$group": {
            "_id": { "id": "$id", "item": "$item" },
            "instock": {
                "$push": {
                    "warehouse": "$instock.warehouse",
                    "qty": "$instock.qty"
                }
            }
        }
    }
]).pretty()
Output:
{
    "_id" : {
        "item" : "journal"
    },
    "instock" : [
        {
            "warehouse" : "A",
            "qty" : [ 2.0, 4.0, 5.0 ]
        },
        {
            "warehouse" : "C",
            "qty" : [ 8.0, 5.0, 2.0 ]
        }
    ]
}
Does AWS DocumentDB validate MongoDB client certificates for mutual TLS?
How do we create a client certificate that can be validated by AWS DocumentDB? In the AWS documentation at https://docs.aws.amazon.com/documentdb/latest/developerguide/connect_programmatically.html#connect_programmatically-tls_enabled, only one-way SSL is mentioned, i.e. the client authenticating the server certificate. I did not find any information about AWS DocumentDB supporting two-way SSL. Can someone help?
Solution

Amazon DocumentDB does not support using client certificates to connect to your cluster. Are you instead looking to have the client authenticate the server's certificate? Amazon DocumentDB supports only SCRAM-based authentication.
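For reference, a minimal sketch of the supported setup (one-way TLS plus SCRAM credentials) with the legacy MongoDB Java driver; the cluster endpoint, credentials, and trust-store path are placeholders, and the trust store is assumed to have been created beforehand from the Amazon RDS CA bundle:

// Trust store previously built from rds-combined-ca-bundle.pem, for example with:
// keytool -importcert -file rds-combined-ca-bundle.pem -keystore rds-truststore.jks
System.setProperty("javax.net.ssl.trustStore", "/path/to/rds-truststore.jks");
System.setProperty("javax.net.ssl.trustStorePassword", "changeit");

// ssl=true makes the client verify the *server* certificate; authentication
// itself is username/password (SCRAM), not a client certificate
MongoClientURI uri = new MongoClientURI(
        "mongodb://myuser:mypassword@mycluster.node.us-east-1.docdb.amazonaws.com:27017"
                + "/?ssl=true&replicaSet=rs0&readPreference=secondaryPreferred");
MongoClient client = new MongoClient(uri);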
Example source code for com.mongodb.client.model.ReturnDocument
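All of the snippets below turn on the same switch: ReturnDocument tells findOneAndUpdate / findOneAndReplace whether to return the document as it was before the modification (ReturnDocument.BEFORE, the driver default) or as it is after (ReturnDocument.AFTER). A minimal self-contained sketch first (the collection and field names are made up for illustration):

import com.mongodb.client.MongoClients;
import com.mongodb.client.MongoCollection;
import com.mongodb.client.model.Filters;
import com.mongodb.client.model.FindOneAndUpdateOptions;
import com.mongodb.client.model.ReturnDocument;
import com.mongodb.client.model.Updates;
import org.bson.Document;

public class ReturnDocumentDemo {
    public static void main(String[] args) {
        MongoCollection<Document> counters = MongoClients.create("mongodb://localhost")
                .getDatabase("test").getCollection("counters");

        // Atomically increment a counter and read the *new* value in one round trip.
        // With ReturnDocument.BEFORE (the default) we would get the pre-increment
        // document instead, or null on a fresh upsert.
        Document counter = counters.findOneAndUpdate(
                Filters.eq("_id", "pageViews"),
                Updates.inc("value", 1),
                new FindOneAndUpdateOptions()
                        .upsert(true)
                        .returnDocument(ReturnDocument.AFTER));

        System.out.println(counter.get("value"));
    }
}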
public O2MSyncDataLoader getPendingDataLoader() {
    O2MSyncDataLoader loader = null;
    Document document = syncEventDoc.findOneAndUpdate(
            Filters.and(Filters.eq(SyncAttrs.STATUS, SyncStatus.PENDING),
                    Filters.eq(SyncAttrs.EVENT_TYPE, String.valueOf(EventType.System))),
            Updates.set(SyncAttrs.STATUS, SyncStatus.IN_PROGRESS),
            new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER)
                    .projection(Projections.include(SyncAttrs.SOURCE_DB_NAME, SyncAttrs.SOURCE_USER_NAME)));
    if (document != null && !document.isEmpty()) {
        Object interval = document.get(SyncAttrs.INTERVAL);
        String appName = document.getString(SyncAttrs.APPLICATION_NAME);
        if (interval != null && interval instanceof Long) {
            loader = new O2MSyncDataLoader((Long) interval, appName);
        } else {
            loader = new O2MSyncDataLoader(120000, appName);
        }
        loader.setEventId(document.getObjectId(SyncAttrs.ID));
        loader.setDbName(document.getString(SyncAttrs.SOURCE_DB_NAME));
        loader.setDbUserName(document.getString(SyncAttrs.SOURCE_USER_NAME));
        loader.setStatus(document.getString(SyncAttrs.STATUS));
    }
    return loader;
}
@Override
public Dataset write(Dataset dataset) {
    // we populate this on first write and retain it thereafter
    if (isBlank(dataset.getId())) {
        dataset.setId(ObjectId.get().toString());
    }
    Observable<Document> observable = getCollection()
            .findOneAndReplace(
                    Filters.eq("id", dataset.getId()),
                    documentTransformer.transform(dataset),
                    new FindOneAndReplaceOptions().upsert(true).returnDocument(ReturnDocument.AFTER));
    return documentTransformer.transform(Dataset.class, observable.toBlocking().single());
}
@Override
public Optional<T> peek() {
    final Bson peekQuery = QueryUtil.generatePeekQuery(defaultHeartbeatExpirationMillis);

    final Document update = new Document();
    update.put("heartbeat", new Date());
    update.put("status", OkraStatus.PROCESSING.name());

    final FindOneAndUpdateOptions options = new FindOneAndUpdateOptions();
    options.returnDocument(ReturnDocument.AFTER);

    final Document document = client
            .getDatabase(getDatabase())
            .getCollection(getCollection())
            .findOneAndUpdate(peekQuery, new Document("$set", update), options);

    if (document == null) {
        return Optional.empty();
    }

    return Optional.ofNullable(serializer.fromDocument(scheduleItemClass, document));
}
@Test
public void updatePojoTest() {
    Bson update = combine(
            set("user", "Jim"),
            set("action", Action.DELETE),
            // unfortunately at this point we need to provide a non generic class, so the codec
            // is able to determine all types
            // remember: type erasure makes it impossible to retrieve type argument values at runtime
            // @todo provide a mechanism to generate non-generic class on the fly. Is that even possible?
            // set("listOfPolymorphicTypes", buildNonGenericClassOnTheFly(
            //         Arrays.asList(new A(123), new B(456f)), List.class, Type.class),
            set("listOfPolymorphicTypes", new PolymorphicTypeList(Arrays.asList(new A(123), new B(456f)))),
            currentDate("creationDate"),
            currentTimestamp("_id"));

    FindOneAndUpdateOptions findOptions = new FindOneAndUpdateOptions();
    findOptions.upsert(true);
    findOptions.returnDocument(ReturnDocument.AFTER);

    MongoCollection<Pojo> pojoMongoCollection = mongoClient.getDatabase("test")
            .getCollection("documents").withDocumentClass(Pojo.class);

    Pojo pojo = pojoMongoCollection.findOneAndUpdate(
            Filters.and(Filters.lt(DBCollection.ID_FIELD_NAME, 0),
                    Filters.gt(DBCollection.ID_FIELD_NAME, 0)),
            update, findOptions);

    assertNotNull(pojo.id);
}
private SmofInsertResult replace(T element, SmofOpOptions options) {
    final SmofInsertResult result = new SmofInsertResultImpl();
    result.setSuccess(true);
    options.upsert(true);
    if (options.isBypassCache() || !cache.asMap().containsValue(element)) {
        final BsonDocument document = parser.toBson(element);
        final Bson query = createUniquenessQuery(document);
        result.setPostInserts(BsonUtils.extrackPosInsertions(document));
        options.setReturnDocument(ReturnDocument.AFTER);
        document.remove(Element.ID);
        final BsonDocument resDoc = collection.findOneAndReplace(query, document, options.toFindOneAndReplace());
        element.setId(resDoc.get(Element.ID).asObjectId().getValue());
        cache.put(element.getId(), element);
    }
    return result;
}
@Override
public Long getNextIdGen(Long interval) {
    Document realUpdate = getIncUpdateObject(getUpdateObject(interval));
    FindOneAndUpdateOptions options = new FindOneAndUpdateOptions()
            .upsert(true)
            .returnDocument(ReturnDocument.AFTER);
    Document ret = getIdGenCollection().findOneAndUpdate(getQueryObject(), realUpdate, options);
    if (ret == null) return null;
    Boolean valid = (Boolean) ret.get(VALID);
    if (valid != null && !valid) {
        throw RaptureExceptionFactory.create(HttpURLConnection.HTTP_BAD_REQUEST,
                mongoMsgCatalog.getMessage("IdGenerator"));
    }
    return (Long) ret.get(SEQ);
}
@Override
public void updateRow(String rowId, Map<String, Object> recordValues) {
    String key = getKey(rowId); // stupid key is row id plus "l/" prepended to it
    MongoCollection<Document> collection = MongoDBFactory.getCollection(instanceName, tableName);
    Document query = new Document();
    query.put(KEY, key);
    Document toPut = new Document();
    toPut.put(KEY, key);
    toPut.put(ROWID, rowId);
    toPut.put(EPOCH, EpochManager.nextEpoch(collection));
    toPut.putAll(recordValues);
    FindOneAndUpdateOptions options = new FindOneAndUpdateOptions().upsert(true).returnDocument(ReturnDocument.AFTER);
    @SuppressWarnings("unused")
    Document ret = collection.findOneAndUpdate(query, new Document($SET, toPut), options);
}
@ExtDirectMethod(ExtDirectMethodType.FORM_POST)
public ExtDirectFormPostResult resetRequest(@RequestParam("email") String emailOrLoginName) {
    String token = UUID.randomUUID().toString();
    User user = this.mongoDb.getCollection(User.class).findOneAndUpdate(
            Filters.and(
                    Filters.or(Filters.eq(CUser.email, emailOrLoginName),
                            Filters.eq(CUser.loginName, emailOrLoginName)),
                    Filters.eq(CUser.deleted, false)),
            Updates.combine(
                    Updates.set(CUser.passwordResetTokenValidUntil,
                            Date.from(ZonedDateTime.now(ZoneOffset.UTC).plusHours(4).toInstant())),
                    Updates.set(CUser.passwordResetToken, token)),
            new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER).upsert(false));

    if (user != null) {
        this.mailService.sendPasswortResetEmail(user);
    }
    return new ExtDirectFormPostResult();
}
/**
 * Saves a set of {@link ProtectedRegion} for the specified world to database.
 *
 * @param world The name of the world
 * @param set The {@link Set} of regions
 * @throws StorageException Thrown if something goes wrong during database query
 */
public void saveAll(final String world, Set<ProtectedRegion> set) throws StorageException {
    MongoCollection<ProcessingProtectedRegion> collection = getCollection();
    final AtomicReference<Throwable> lastError = new AtomicReference<>();
    final CountDownLatch waiter = new CountDownLatch(set.size());
    for (final ProtectedRegion region : set) {
        if (listener != null) listener.beforeDatabaseUpdate(world, region);
        collection.findOneAndUpdate(
                Filters.and(Filters.eq("name", region.getId()), Filters.eq("world", world)),
                new ProcessingProtectedRegion(region, world),
                new FindOneAndUpdateOptions().upsert(true).returnDocument(ReturnDocument.AFTER),
                OperationResultCallback.create(lastError, waiter, new UpdateCallback(world)));
    }
    ConcurrentUtils.safeAwait(waiter);
    Throwable realLastError = lastError.get();
    if (realLastError != null)
        throw new StorageException("An error occurred while saving or updating in MongoDB.", realLastError);
}
@Override
public io.vertx.ext.mongo.MongoClient findOneAndUpdateWithOptions(String collection, JsonObject query,
        JsonObject update, FindOptions findOptions, UpdateOptions updateOptions,
        Handler<AsyncResult<JsonObject>> resultHandler) {
    requireNonNull(collection, "collection cannot be null");
    requireNonNull(query, "query cannot be null");
    requireNonNull(update, "update cannot be null");
    requireNonNull(findOptions, "find options cannot be null");
    requireNonNull(updateOptions, "update options cannot be null");
    requireNonNull(resultHandler, "resultHandler cannot be null");

    JsonObject encodedQuery = encodeKeyWhenUseObjectId(query);

    Bson bquery = wrap(encodedQuery);
    Bson bupdate = wrap(update);
    FindOneAndUpdateOptions foauOptions = new FindOneAndUpdateOptions();
    foauOptions.sort(wrap(findOptions.getSort()));
    foauOptions.projection(wrap(findOptions.getFields()));
    foauOptions.upsert(updateOptions.isUpsert());
    foauOptions.returnDocument(updateOptions.isReturningNewDocument() ? ReturnDocument.AFTER : ReturnDocument.BEFORE);

    MongoCollection<JsonObject> coll = getCollection(collection);
    coll.findOneAndUpdate(bquery, bupdate, foauOptions, wrapCallback(resultHandler));
    return this;
}
@Override
public io.vertx.ext.mongo.MongoClient findOneAndReplaceWithOptions(String collection, JsonObject query,
        JsonObject replace, FindOptions findOptions, UpdateOptions updateOptions,
        Handler<AsyncResult<JsonObject>> resultHandler) {
    requireNonNull(collection, "collection cannot be null");
    requireNonNull(query, "query cannot be null");
    requireNonNull(findOptions, "find options cannot be null");
    requireNonNull(updateOptions, "update options cannot be null");
    requireNonNull(resultHandler, "resultHandler cannot be null");

    JsonObject encodedQuery = encodeKeyWhenUseObjectId(query);

    Bson bquery = wrap(encodedQuery);
    FindOneAndReplaceOptions foarOptions = new FindOneAndReplaceOptions();
    foarOptions.sort(wrap(findOptions.getSort()));
    foarOptions.projection(wrap(findOptions.getFields()));
    foarOptions.upsert(updateOptions.isUpsert());
    foarOptions.returnDocument(updateOptions.isReturningNewDocument() ? ReturnDocument.AFTER : ReturnDocument.BEFORE);

    MongoCollection<JsonObject> coll = getCollection(collection);
    coll.findOneAndReplace(bquery, replace, foarOptions, wrapCallback(resultHandler));
    return this;
}
/**
 * Configures this modifier so that new (updated) version of document will be returned in
 * case of successful update.
 * @see #returningOld()
 * @return {@code this} modifier for chained invocation
 */
// safe unchecked: we expect I to be a self type
@SuppressWarnings("unchecked")
public final M returningNew() {
    options.returnDocument(ReturnDocument.AFTER);
    return (M) this;
}
@Override
public Optional<T> reschedule(final T item) {
    validateReschedule(item);

    final Document query = new Document();
    query.put("_id", new ObjectId(item.getId()));
    query.put("heartbeat", DateUtil.toDate(item.getHeartbeat()));

    final Document setDoc = new Document();
    setDoc.put("heartbeat", null);
    setDoc.put("runDate", DateUtil.toDate(item.getRunDate()));
    setDoc.put("status", OkraStatus.PENDING.name());

    final Document update = new Document();
    update.put("$set", setDoc);

    final FindOneAndUpdateOptions options = new FindOneAndUpdateOptions();
    options.returnDocument(ReturnDocument.AFTER);

    final Document document = client
            .getDatabase(getDatabase())
            .getCollection(getCollection())
            .findOneAndUpdate(query, update, options);

    if (document == null) {
        return Optional.empty();
    }

    return Optional.ofNullable(serializer.fromDocument(scheduleItemClass, document));
}
@Override
public Optional<T> heartbeatAndUpdateCustomAttrs(final T item, final Map<String, Object> attrs) {
    validateHeartbeat(item);

    final Document query = new Document();
    query.put("_id", new ObjectId(item.getId()));
    query.put("status", OkraStatus.PROCESSING.name());
    query.put("heartbeat", DateUtil.toDate(item.getHeartbeat()));

    final Document update = new Document();
    update.put("$set", new Document("heartbeat", new Date()));

    if (attrs != null && !attrs.isEmpty()) {
        attrs.forEach((key, value) -> update.append("$set", new Document(key, value)));
    }

    final FindOneAndUpdateOptions options = new FindOneAndUpdateOptions();
    options.returnDocument(ReturnDocument.AFTER);

    final Document result = client
            .getDatabase(getDatabase())
            .getCollection(getCollection())
            .findOneAndUpdate(query, update, options);

    if (result == null) {
        return Optional.empty();
    }

    return Optional.ofNullable(serializer.fromDocument(scheduleItemClass, result));
}
public SyncMap saveMapping(SyncMap map) {
    // TODO: check why this is needed
    if (map.getMapId() == null) {
        map.setMapId(new ObjectId());
    }
    return syncMappings.findOneAndReplace(Filters.eq(SyncAttrs.ID, map.getMapId()), map,
            new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER).upsert(true));
}
public SyncEvent getPendingEvent(List<String> eventTypes) {
    return syncEvents.findOneAndUpdate(
            Filters.and(Filters.eq(SyncAttrs.STATUS, SyncStatus.PENDING),
                    Filters.in(SyncAttrs.EVENT_TYPE, eventTypes)),
            Updates.set(SyncAttrs.STATUS, SyncStatus.IN_PROGRESS),
            new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
}
public SyncEvent saveEvent(SyncEvent event) {
    if (event.getEventId() == null) {
        event.setEventId(new ObjectId());
    }
    return syncEvents.findOneAndReplace(Filters.eq(SyncAttrs.ID, event.getEventId()), event,
            new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER).upsert(true));
}
public SyncConnectionInfo updateConnection(SyncConnectionInfo connInfo) {
    if (connInfo.getConnectionId() == null) {
        connInfo.setConnectionId(new ObjectId());
    }
    return connectionInfo.findOneAndReplace(
            Filters.eq(String.valueOf(ConnectionInfoAttributes._id), connInfo.getConnectionId()), connInfo,
            new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER).upsert(true));
}
public SyncNode getFailedNode(long lastPingTime) {
    SyncNode failedNode = syncNodeMapping.findOneAndUpdate(
            Filters.and(Filters.lte(SyncAttrs.LAST_PING_TIME, lastPingTime),
                    Filters.eq(SyncAttrs.LIFE_CYCLE, SyncConfig.INSTANCE.getDbProperty(SyncConstants.LIFE))),
            Updates.set(SyncAttrs.LAST_PING_TIME, System.currentTimeMillis()),
            new FindOneAndUpdateOptions().upsert(false).returnDocument(ReturnDocument.BEFORE));
    if (failedNode != null && failedNode.getFailureTime() == 0) {
        syncNodeMapping.findOneAndUpdate(Filters.eq(SyncAttrs.ID, failedNode.getId()),
                Updates.set(SyncAttrs.FAILURE_TIME, failedNode.getLastPingTime()));
    }
    return failedNode;
}
public MngToOrclSyncWriter(BlockingQueue<Document> dataBuffer, MongoToOracleMap map, SyncMarker marker,
        CountDownLatch latch, boolean isRestrictedSyncEnabled, ObjectId eventId) {
    super();
    this.dataBuffer = dataBuffer;
    this.map = map;
    this.marker = marker;
    this.latch = latch;
    this.isRestrictedSyncEnabled = isRestrictedSyncEnabled;
    this.eventId = eventId;
    this.options = new FindOneAndUpdateOptions();
    options.returnDocument(ReturnDocument.BEFORE);
}
public String createNewLotCodeForTransportation() {
    Document updateLotCode = collection.findOneAndUpdate(
            exists("lotConfiguration.lastInsertionTransportation"),
            new Document("$inc", new Document("lotConfiguration.lastInsertionTransportation", 1)),
            new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
    LaboratoryConfiguration laboratoryConfiguration = LaboratoryConfiguration.deserialize(updateLotCode.toJson());
    return laboratoryConfiguration.getLotConfiguration().getLastInsertionTransportation().toString();
}
public String createNewLotCodeForExam() {
    Document updateLotCode = collection.findOneAndUpdate(
            exists("lotConfiguration.lastInsertionExam"),
            new Document("$inc", new Document("lotConfiguration.lastInsertionExam", 1)),
            new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
    LaboratoryConfiguration laboratoryConfiguration = LaboratoryConfiguration.deserialize(updateLotCode.toJson());
    return laboratoryConfiguration.getLotConfiguration().getLastInsertionExam().toString();
}
@Override
public void execUpdate(Bson filter, Bson update, SmofOpOptions options) {
    options.setReturnDocument(ReturnDocument.AFTER);
    final BsonDocument result = collection.findOneAndUpdate(filter, update, options.toFindOneAndUpdateOptions());
    final T element = parser.fromBson(result, type);
    cache.put(result.getObjectId(Element.ID).getValue(), element);
}
@Override
public void put(String key, String value) {
    MongoCollection<Document> collection = getCollection();
    Document query = new Document(KEY, key);
    Document toPut = new Document($SET, new Document(KEY, key).append(VALUE, value));
    FindOneAndUpdateOptions options = new FindOneAndUpdateOptions().upsert(true).returnDocument(ReturnDocument.BEFORE);
    Document result = collection.findOneAndUpdate(query, toPut, options);
    if (needsFolderHandling && result == null) {
        dirRepo.registerParentage(key);
    }
}
private void saveDocument(String key, String column, Object val) {
    registerKey(key);
    MongoCollection<Document> collection = getCollection(key);
    Document dbkey = new Document(ROWKEY, key).append(COLKEY, column);
    Document dbval = new Document($SET, new Document(ROWKEY, key).append(COLKEY, column).append(VALKEY, val));
    FindOneAndUpdateOptions options = new FindOneAndUpdateOptions().upsert(true).returnDocument(ReturnDocument.AFTER);
    try {
        @SuppressWarnings("unused")
        Document ret = collection.findOneAndUpdate(dbkey, dbval, options);
    } catch (MongoException me) {
        throw RaptureExceptionFactory.create(HttpURLConnection.HTTP_INTERNAL_ERROR, new ExceptionToString(me));
    }
}
/**
 * Returns the next epoch available and advances the counter. Guaranteed to
 * be unique for the given collection. If the epoch document does not
 * already exist, a new one is created and the first epoch returned will be 1L.
 *
 * @param collection - the MongoCollection to get the next epoch for
 * @return Long - a unique epoch value for this collection
 */
public static Long nextEpoch(final MongoCollection<Document> collection) {
    final FindOneAndUpdateOptions options = new FindOneAndUpdateOptions().upsert(true).returnDocument(ReturnDocument.AFTER);
    MongoRetryWrapper<Long> wrapper = new MongoRetryWrapper<Long>() {
        public Long action(FindIterable<Document> cursor) {
            Document ret = collection.findOneAndUpdate(getEpochQueryObject(), getIncUpdateObject(getUpdateObject()), options);
            return (Long) ret.get(SEQ);
        }
    };
    return wrapper.doAction();
}
@ExtDirectMethod
public void sendPassordResetEmail(String userId) {
    String token = UUID.randomUUID().toString();
    User user = this.mongoDb.getCollection(User.class).findOneAndUpdate(
            Filters.eq(CUser.id, userId),
            Updates.combine(
                    Updates.set(CUser.passwordResetTokenValidUntil,
                            Date.from(ZonedDateTime.now(ZoneOffset.UTC).plusHours(4).toInstant())),
                    Updates.set(CUser.passwordResetToken, token)),
            new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
    this.mailService.sendPasswortResetEmail(user);
}
@Override
public void onAuthenticationSuccess(HttpServletRequest request, HttpServletResponse response,
        Authentication authentication) throws IOException, ServletException {
    Map<String, Object> result = new HashMap<>();
    result.put("success", true);

    MongoUserDetails userDetails = (MongoUserDetails) authentication.getPrincipal();
    if (userDetails != null) {
        User user;
        if (!userDetails.isPreAuth()) {
            user = this.mongoDb.getCollection(User.class).findOneAndUpdate(
                    Filters.eq(CUser.id, userDetails.getUserDbId()),
                    Updates.set(CUser.lastAccess, new Date()),
                    new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
        } else {
            user = this.mongoDb.getCollection(User.class)
                    .find(Filters.eq(CUser.id, userDetails.getUserDbId())).first();
        }
        result.put(SecurityService.AUTH_USER,
                new UserDetailDto(userDetails, user, CsrfController.getCsrfToken(request)));
    }

    response.setCharacterEncoding("UTF-8");
    response.getWriter().print(this.objectMapper.writeValueAsString(result));
    response.getWriter().flush();
}
public SyncUserSession saveSession(SyncUserSession userSession) {
    return userSessionCollection.findOneAndReplace(
            Filters.eq(String.valueOf(SessionAttributes._id), userSession.getSessionId()), userSession,
            new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER).upsert(true));
}
public void setEncryptedPassword(ObjectId id, byte[] pass) {
    connectionInfo.findOneAndUpdate(
            Filters.and(Filters.eq(String.valueOf(ConnectionInfoAttributes._id), id)),
            Updates.set(String.valueOf(ConnectionInfoAttributes.password), pass),
            new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
}
public SyncNode updateNodeDetails(SyncNode nodeMapper) {
    return syncNodeMapping.findOneAndReplace(Filters.eq(SyncAttrs.ID, nodeMapper.getId()), nodeMapper,
            new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER).upsert(true));
}
public SyncNode getNodeDetails(SyncNode nodeMapper) {
    Bson filter = Filters.eq(SyncAttrs.UUID, nodeMapper.getUUID());
    logger.info("Getting node with filter " + filter);
    return syncNodeMapping.findOneAndUpdate(filter, Updates.unset(SyncAttrs.FAILURE_TIME),
            new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER));
}
public SyncUser updateUser(SyncUser user) {
    return userDetailsCollection.findOneAndReplace(
            Filters.eq(String.valueOf(UserDetailAttributes._id), user.getUserId()), user,
            new FindOneAndReplaceOptions().returnDocument(ReturnDocument.AFTER).upsert(true));
}
public void saveFlow(FlowEntity flowEntity) {
    if (flowEntity.getId() == null) {
        Integer nextId;
        Document lastFlow = collection.find()
                .projection(new Document("flow_id", 1))
                .sort(new Document("flow_id", -1))
                .limit(1)
                .first();
        if (lastFlow == null) {
            nextId = 1;
        } else {
            nextId = lastFlow.getInteger("flow_id") + 1;
        }
        flowEntity.setId(nextId);
    }

    Document filter = new Document().append("flow_id", flowEntity.getId());

    List<Document> conditionList = flowEntity.getConditionList()
            .stream()
            .map(conditionEntity -> new Document()
                    .append("dev_id", conditionEntity.getDevId())
                    .append("dev_type", conditionEntity.getDevType())
                    .append("type", conditionEntity.getType())
                    .append("parameter", conditionEntity.getParameter()))
            .collect(Collectors.toList());

    List<Document> actionList = flowEntity.getActionList()
            .stream()
            .map(actionEntity -> new Document()
                    .append("dev_id", actionEntity.getDevId())
                    .append("dev_type", actionEntity.getDevType())
                    .append("type", actionEntity.getType())
                    .append("parameter", actionEntity.getParameter()))
            .collect(Collectors.toList());

    Document entityDocument = new Document()
            .append("$set", new Document()
                    .append("flow_id", flowEntity.getId())
                    .append("name", flowEntity.getName())
                    .append("order_num", flowEntity.getOrderNum())
                    .append("conditions", conditionList)
                    .append("actions", actionList));

    FindOneAndUpdateOptions options = new FindOneAndUpdateOptions()
            .returnDocument(ReturnDocument.AFTER)
            .upsert(true);

    collection.findOneAndUpdate(filter, entityDocument, options);
}
public SmofOpOptionsImpl() {
    upsert = false;
    validateDocuments = true;
    ret = ReturnDocument.AFTER;
    bypassCache = false;
}
@Override
public void setReturnDocument(ReturnDocument doc) {
    this.ret = doc;
}
private void updateLockedProperties(AuthenticationFailureBadCredentialsEvent event) {
    Object principal = event.getAuthentication().getPrincipal();

    if (this.loginLockAttempts != null
            && (principal instanceof String || principal instanceof MongoUserDetails)) {

        User user = null;
        if (principal instanceof String) {
            user = this.mongoDb.getCollection(User.class).findOneAndUpdate(
                    Filters.and(Filters.eq(CUser.loginName, principal),
                            Filters.eq(CUser.deleted, false)),
                    Updates.inc(CUser.failedLogins, 1),
                    new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER).upsert(false));
        } else {
            user = this.mongoDb.getCollection(User.class).findOneAndUpdate(
                    Filters.eq(CUser.id, ((MongoUserDetails) principal).getUserDbId()),
                    Updates.inc(CUser.failedLogins, 1),
                    new FindOneAndUpdateOptions().returnDocument(ReturnDocument.AFTER).upsert(false));
        }

        if (user != null) {
            if (user.getFailedLogins() >= this.loginLockAttempts) {
                if (this.loginLockMinutes != null) {
                    this.mongoDb.getCollection(User.class).updateOne(
                            Filters.eq(CUser.id, user.getId()),
                            Updates.set(CUser.lockedOutUntil,
                                    Date.from(ZonedDateTime.now(ZoneOffset.UTC)
                                            .plusMinutes(this.loginLockMinutes).toInstant())));
                } else {
                    this.mongoDb.getCollection(User.class).updateOne(
                            Filters.eq(CUser.id, user.getId()),
                            Updates.set(CUser.lockedOutUntil,
                                    Date.from(ZonedDateTime.now(ZoneOffset.UTC)
                                            .plusYears(1000).toInstant())));
                }
            }
        } else {
            Application.logger.warn("Unknown user login attempt: {}", principal);
        }
    } else {
        Application.logger.warn("Invalid login attempt: {}", principal);
    }
}
That concludes today's look at the MongoDB multi-tenancy game with @Document. Thanks for reading. For more on MongoDB and SpEL expressions in the @Document annotation, MongoDB aggregation in Amazon DocumentDB, AWS DocumentDB and mutual-TLS client certificates, or the com.mongodb.client.model.ReturnDocument examples above, you can search this site.