This article answers two questions: how multiple Quartz 2.2 schedulers behave together with @DisallowConcurrentExecution, and how Quartz scheduling works. It also covers the purpose of the @DisallowConcurrentExecution annotation (having the scheduler finish the current run before starting the next) and example source code using com.google.common.util.concurrent.ExecutionError and com.google.common.util.concurrent.ExecutionList.
Contents:
- Quartz 2.2 multiple schedulers and @DisallowConcurrentExecution (how Quartz scheduling works)
- The purpose of the @DisallowConcurrentExecution annotation (having the scheduler finish the current run before starting the next)
- Example source code using com.google.common.util.concurrent.ExecutionError
- Example source code using com.google.common.util.concurrent.ExecutionList
Quartz 2.2 multiple schedulers and @DisallowConcurrentExecution (how Quartz scheduling works)
Consider this example:
- A sample web application calls scheduler.start() when it starts up.
- The scheduler is configured to store its jobs in a database.
- The application is replicated across six web servers.

So if we start all six web servers, there will be six schedulers with the same name running against a single database. As stated at https://quartz-scheduler.org/documentation/quartz-2.1.x/cookbook/MultipleSchedulers:

"Never start (scheduler.start()) a non-clustered instance against the same set of database tables that any other instance with the same scheduler name is running (start()ed) against. You may get serious data corruption, and will definitely experience erratic behavior."
So this setup would fail. My question is: if I make sure all of my jobs are annotated with @DisallowConcurrentExecution, will that do the job, or will it still fail? And if @DisallowConcurrentExecution does not help, should I manually configure one server to act as the master, along these lines:
public class StartUp implements ServletContextListener {

    public void contextInitialized(ServletContextEvent event) {
        if (THIS_IS_MASTER_TOMCAT) {
            scheduler.start();
        }
    }
}
Is there a better way to do this?
Answer 1

Basically Rene M. is correct. Here is the relevant Quartz documentation:
http://www.quartz-scheduler.org/documentation/quartz-2.2.x/configuration/ConfigJDBCJobStoreClustering.html
Now some background and a conceptual example of how we use this at my company. We run Quartz in clustered mode inside a WildFly cluster; that is, every WildFly cluster node runs Quartz. Because Quartz itself runs in clustered mode and all nodes point to the same database schema, we are guaranteed that each job runs only once per cluster. Again, see the documentation. The key points are:
- A single Quartz cluster must run against a single Quartz database schema. You obviously have to create the relational database tables as described in the documentation.
- You must set up the quartz.properties file correctly, and every node in the cluster must carry a copy of exactly the same quartz.properties file.
- Finally, you must use a non-JTA datasource, otherwise the Quartz cluster will fail. In the WildFly world this usually means Quartz needs its own datasource.
Example quartz.properties:
#============================================================================
# Configure Main Scheduler Properties
#============================================================================
org.quartz.scheduler.instanceName = BjondScheduler
org.quartz.scheduler.instanceId = AUTO

#============================================================================
# Configure ThreadPool
#============================================================================
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 5

#============================================================================
# Configure JobStore
#============================================================================
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
org.quartz.jobStore.driverDelegateClass=org.quartz.impl.jdbcjobstore.PostgreSQLDelegate
org.quartz.jobStore.useProperties = false
org.quartz.jobStore.tablePrefix=QRTZ_
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 5000
org.quartz.scheduler.wrapJobExecutionInUserTransaction = true
org.quartz.scheduler.userTransactionURL = java:jboss/UserTransaction
org.quartz.jobStore.dataSource = PostgreSQLDS
org.quartz.jobStore.nonManagedTXDataSource = PostgreSQLDSNoJTA
org.quartz.dataSource.PostgreSQLDSNoJTA.jndiURL=java:jboss/datasources/PostgreSQLDSNoJTA
org.quartz.dataSource.PostgreSQLDS.jndiURL=java:jboss/datasources/PostgreSQLDS

#============================================================================
# Configure Logging
#============================================================================
#org.quartz.plugin.jobHistory.class=org.quartz.plugins.history.LoggingJobHistoryPlugin
#org.quartz.plugin.jobHistory.jobToBeFiredMessage=Bjond Job [{1}.{0}] to be fired by trigger [{4}.{3}] at: {2, date, HH:mm:ss MM/dd/yyyy} re-fire count: {7}
#org.quartz.plugin.jobHistory.jobSuccessMessage=Bjond Job [{1}.{0}] execution complete and reports: {8}
#org.quartz.plugin.jobHistory.jobFailedMessage=Bjond Job [{1}.{0}] execution failed with exception: {8}
#org.quartz.plugin.jobHistory.jobWasVetoedMessage=Bjond Job [{1}.{0}] was vetoed. It was to be fired by trigger [{4}.{3}] at: {2, date, dd-MM-yyyy HH:mm:ss.SSS}
And here is the datasource snippet from standalone.xml:
<datasource jta="false" jndi-name="java:jboss/datasources/PostgreSQLDSNoJTA" pool-name="PostgreSQLDSNoJTA" enabled="true" use-java-context="true" use-ccm="true">
You can fill out the rest of this datasource element as you need. @DisallowConcurrentExecution is a good idea: it prevents concurrent executions of a particular job on a single node. But it is the Quartz cluster that keeps the same job from running on multiple VMs, not this annotation.
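To make the setup concrete, here is a minimal sketch of the bootstrap code each node could run (in a real deployment it would live in a startup listener or singleton bean rather than a main method). It assumes a clustered JDBC job store configuration such as the quartz.properties above is visible to StdSchedulerFactory; the job class, the job and trigger identities, and the five-minute interval are illustrative, not taken from the answer.

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredSchedulerBootstrap {

    // The annotation stops overlapping runs of this JobDetail on a single node;
    // the clustered job store stops the same JobDetail from firing on two nodes at once.
    @DisallowConcurrentExecution
    public static class ReportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("Executing " + context.getJobDetail().getKey());
        }
    }

    public static void main(String[] args) throws Exception {
        // Picks up quartz.properties (org.quartz.jobStore.isClustered = true) from the classpath.
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(ReportJob.class)
                .withIdentity("reportJob", "reports")
                .storeDurably()
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("reportTrigger", "reports")
                .forJob(job)
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(5)
                        .repeatForever())
                .build();

        // The job may already be in the shared database from another node; replace rather than fail.
        scheduler.addJob(job, true);
        if (!scheduler.checkExists(trigger.getKey())) {
            scheduler.scheduleJob(trigger);
        }
        scheduler.start();
    }
}

Every node runs the same code: the clustered job store decides which node fires each trigger, while addJob(job, true) and the checkExists guard keep startup idempotent when several nodes come up against the same schema.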
The purpose of the @DisallowConcurrentExecution annotation (having the scheduler finish the current run before starting the next)

By default, Quartz runs jobs concurrently: the scheduler does not wait for the previous execution to finish, it simply fires again whenever the interval elapses. If a job runs for a long time it can hold resources for a long time and block other jobs. In Spring you prevent this by setting the concurrent property to false, which disables concurrent execution:
<property name="concurrent" value="false" />
When you are not using Spring, add the @DisallowConcurrentExecution annotation to the Job implementation class instead.
@DisallowConcurrentExecution forbids concurrent execution of the same JobDetail definition. The annotation goes on the Job class, but it does not mean that only one instance of the class can run at a time; it means that the same job definition (a JobDetail) cannot run concurrently, while different JobDetails built from the same class can still run at the same time. For example, suppose we have a Job class called SayHelloJob carrying this annotation, and we define several JobDetails on it, such as sayHelloToJoeJobDetail and sayHelloToMikeJobDetail. When the scheduler runs, it will not execute two instances of sayHelloToJoeJobDetail (or of sayHelloToMikeJobDetail) concurrently, but it can execute sayHelloToJoeJobDetail and sayHelloToMikeJobDetail at the same time, as the sketch below shows.
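A minimal sketch of that example: the SayHelloJob class and the two JobDetail names come from the text above, while the job body, the "name" job-data key, and the "hello" group are illustrative assumptions.

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;

public class SayHelloExample {

    // One Job class, annotated once.
    @DisallowConcurrentExecution
    public static class SayHelloJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("Hello, " + context.getMergedJobDataMap().getString("name"));
        }
    }

    public static void main(String[] args) {
        // Two different job definitions (JobDetails) built from the same class.
        JobDetail sayHelloToJoeJobDetail = JobBuilder.newJob(SayHelloJob.class)
                .withIdentity("sayHelloToJoe", "hello")
                .usingJobData("name", "Joe")
                .build();

        JobDetail sayHelloToMikeJobDetail = JobBuilder.newJob(SayHelloJob.class)
                .withIdentity("sayHelloToMike", "hello")
                .usingJobData("name", "Mike")
                .build();

        // The scheduler never overlaps two runs of sayHelloToJoeJobDetail,
        // but sayHelloToJoeJobDetail and sayHelloToMikeJobDetail may run at the same time.
        System.out.println(sayHelloToJoeJobDetail.getKey() + " and " + sayHelloToMikeJobDetail.getKey());
    }
}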
@PersistJobDataAfterExecution also goes on the Job class. It means that after the job completes normally, the changes it made to its JobDataMap should be persisted so they are available to the next execution. When you use @PersistJobDataAfterExecution it is strongly recommended to add @DisallowConcurrentExecution as well, so that concurrent executions cannot corrupt the stored data.
@DisallowConcurrentExecution is placed on the class implementing Job and means the job must not execute concurrently. I used to read that as "the scheduler may never invoke the Job class at the same time", but testing shows it is more specific. The situation only arises when the job's execution time (say 10 seconds) exceeds the trigger interval (say 5 seconds): by default the scheduler immediately starts a new thread so that the job still fires at the configured interval. With the annotation it instead waits for the running execution to finish before firing again (which means the job no longer fires at exactly the configured interval).
Test code, based on the official example: the trigger interval is set to 3 seconds but the job takes 5 seconds to run. With @DisallowConcurrentExecution the scheduler waits for the current execution to finish before running the job again; without it, a new thread is started at the 3-second mark. A sketch of such a test follows.
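The original post does not reproduce the test code, so here is a minimal sketch in the same spirit; the class names, identities, and console output are illustrative. The job sleeps for 5 seconds and is fired by a simple trigger every 3 seconds: with the annotation in place runs never overlap, and if you remove it a new run starts every 3 seconds even while the previous one is still sleeping.

import org.quartz.DisallowConcurrentExecution;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class DisallowConcurrentExecutionTest {

    @DisallowConcurrentExecution   // remove this line to see runs overlap every 3 seconds
    public static class SlowJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("start  " + java.time.LocalTime.now());
            try {
                Thread.sleep(5000); // the job takes 5 seconds
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
            System.out.println("finish " + java.time.LocalTime.now());
        }
    }

    public static void main(String[] args) throws Exception {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();

        JobDetail job = JobBuilder.newJob(SlowJob.class)
                .withIdentity("slowJob", "test")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("every3s", "test")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInSeconds(3) // trigger interval: 3 seconds
                        .repeatForever())
                .build();

        scheduler.scheduleJob(job, trigger);
        scheduler.start();
    }
}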
Example source code using com.google.common.util.concurrent.ExecutionError
public ReloadableSslContext get(
        File trustCertificatesFile,
        Optional<File> clientCertificatesFile,
        Optional<File> privateKeyFile,
        Optional<String> privateKeyPassword,
        long sessionCacheSize,
        Duration sessionTimeout,
        List<String> ciphers)
{
    try {
        return cache.getUnchecked(new SslContextConfig(
                trustCertificatesFile,
                clientCertificatesFile,
                privateKeyFile,
                privateKeyPassword,
                sessionCacheSize,
                sessionTimeout,
                ciphers));
    }
    catch (UncheckedExecutionException | ExecutionError e) {
        throw new RuntimeException("Error initializing SSL context", e.getCause());
    }
}
public void testBulkLoadError() throws ExecutionException {
    Error e = new Error();
    CacheLoader<Object, Object> loader = errorLoader(e);
    LoadingCache<Object, Object> cache = CacheBuilder.newBuilder()
            .recordStats()
            .build(bulkLoader(loader));
    CacheStats stats = cache.stats();
    assertEquals(0, stats.missCount());
    assertEquals(0, stats.loadSuccessCount());
    assertEquals(0, stats.loadExceptionCount());
    assertEquals(0, stats.hitCount());

    try {
        cache.getAll(asList(new Object()));
        fail();
    } catch (ExecutionError expected) {
        assertSame(e, expected.getCause());
    }
    stats = cache.stats();
    assertEquals(1, stats.missCount());
    assertEquals(0, stats.loadSuccessCount());
    assertEquals(1, stats.loadExceptionCount());
    assertEquals(0, stats.hitCount());
}
OnDemandShardState get() throws Exception {
    if (shardActor == null) {
        return OnDemandShardState.newBuilder().build();
    }

    try {
        return ONDEMAND_SHARD_STATE_CACHE.get(shardName, this::retrieveState);
    } catch (ExecutionException | UncheckedExecutionException | ExecutionError e) {
        if (e.getCause() != null) {
            Throwables.propagateIfPossible(e.getCause(), Exception.class);
            throw new RuntimeException("unexpected", e.getCause());
        }
        throw e;
    }
}
private static DB constructDB(String dbFile) {
    DB db;
    try {
        DBMaker dbMaker = DBMaker.newFileDB(new File(dbFile));
        db = dbMaker
                .transactionDisable()
                .mmapFileEnable()
                .asyncWriteEnable()
                .compressionEnable()
                // .cacheSize(1024 * 1024) this bloats memory consumption
                .make();
        return db;
    } catch (ExecutionError | IOError | Exception e) {
        LOG.error("Could not construct db from file.", e);
        return null;
    }
}
@Override
public void acquireLock(final StaticBuffer key, final StaticBuffer column, final StaticBuffer expectedValue,
        final StoreTransaction txh) throws BackendException {
    final DynamoDbStoreTransaction tx = DynamoDbStoreTransaction.getTx(txh);
    final Pair<StaticBuffer, StaticBuffer> keyColumn = Pair.of(key, column);

    final DynamoDbStoreTransaction existing;
    try {
        existing = keyColumnLocalLocks.get(keyColumn, () -> tx);
    } catch (ExecutionException | UncheckedExecutionException | ExecutionError e) {
        throw new TemporaryLockingException("Unable to acquire lock", e);
    }
    if (null != existing && tx != existing) {
        throw new TemporaryLockingException(String.format("tx %s already locked key-column %s when tx %s tried to lock",
                existing.toString(), keyColumn.toString(), tx.toString()));
    }

    // Titan's locking expects that only the first expectedValue for a given key/column should be used
    tx.putKeyColumnOnlyIfItIsNotYetChangedInTx(this, key, column, expectedValue);
}
public OperatorFactory compileJoinOperatorFactory(
        int operatorId,
        PlanNodeId planNodeId,
        LookupSourceSupplier lookupSourceSupplier,
        List<? extends Type> probeTypes,
        List<Integer> probeJoinChannel,
        Optional<Integer> probeHashChannel,
        JoinType joinType)
{
    try {
        HashJoinOperatorFactoryFactory operatorFactoryFactory =
                joinProbeFactories.get(new JoinOperatorCacheKey(probeTypes, probeJoinChannel, probeHashChannel, joinType));
        return operatorFactoryFactory.createHashJoinOperatorFactory(
                operatorId, planNodeId, lookupSourceSupplier, probeTypes, joinType);
    }
    catch (ExecutionException | UncheckedExecutionException | ExecutionError e) {
        throw Throwables.propagate(e.getCause());
    }
}
V get(K key, int hash, CacheLoader<? super K, V> loader) throws ExecutionException {
    checkNotNull(key);
    checkNotNull(loader);
    try {
        if (count != 0) { // read-volatile; ensures visibility of the segment's entries
            // don't call getLiveEntry, which would ignore loading values
            ReferenceEntry<K, V> e = getEntry(key, hash);
            if (e != null) {
                long now = map.ticker.read();
                V value = getLiveValue(e, now); // non-null only if the entry has not expired
                if (value != null) {
                    recordRead(e, now); // bookkeeping for the access/recency queues
                    statsCounter.recordHits(1); // a cache hit
                    // may schedule an asynchronous refresh if refreshAfterWrite is configured
                    return scheduleRefresh(e, key, hash, value, now, loader);
                }
                ValueReference<K, V> valueReference = e.getValueReference();
                if (valueReference.isLoading()) {
                    return waitForLoadingValue(e, key, valueReference);
                }
            }
        }

        // at this point e is either null or expired; load the value under the segment lock
        return lockedGetOrLoad(key, hash, loader);
    } catch (ExecutionException ee) {
        Throwable cause = ee.getCause();
        if (cause instanceof Error) {
            throw new ExecutionError((Error) cause);
        } else if (cause instanceof RuntimeException) {
            throw new UncheckedExecutionException(cause);
        }
        throw ee;
    } finally {
        postReadCleanup();
    }
}
private int getOrCreateNodeId(String nodeIdentifier) {
    try {
        return nodeIdCache.getUnchecked(nodeIdentifier);
    } catch (UncheckedExecutionException | ExecutionError e) {
        throw Throwables.propagate(e.getCause());
    }
}
public PagesIndexOrdering compilePagesIndexOrdering(List<Type> sortTypes, List<Integer> sortChannels, List<SortOrder> sortOrders) {
    requireNonNull(sortTypes, "sortTypes is null");
    requireNonNull(sortChannels, "sortChannels is null");
    requireNonNull(sortOrders, "sortOrders is null");

    try {
        return pagesIndexOrderings.get(new PagesIndexComparatorCacheKey(sortTypes, sortChannels, sortOrders));
    } catch (ExecutionException | UncheckedExecutionException | ExecutionError e) {
        throw Throwables.propagate(e.getCause());
    }
}
public LookupSourceFactory compileLookupSourceFactory(List<? extends Type> types, List<Integer> joinChannels) {
    try {
        return lookupSourceFactories.get(new CacheKey(types, joinChannels));
    } catch (ExecutionException | UncheckedExecutionException | ExecutionError e) {
        throw Throwables.propagate(e.getCause());
    }
}
public PagesHashStrategyFactory compilePagesHashStrategyFactory(List<Type> types, List<Integer> joinChannels) {
    requireNonNull(types, "types is null");
    requireNonNull(joinChannels, "joinChannels is null");

    try {
        return new PagesHashStrategyFactory(hashStrategies.get(new CacheKey(types, joinChannels)));
    } catch (ExecutionException | UncheckedExecutionException | ExecutionError e) {
        throw Throwables.propagate(e.getCause());
    }
}
private static <K, V> V get(LoadingCache<K, V> cache, K key) {
    try {
        return cache.get(key);
    } catch (ExecutionException | UncheckedExecutionException | ExecutionError e) {
        throw Throwables.propagate(e.getCause());
    }
}
private static <K, V> Map<K, V> getAll(LoadingCache<K, V> cache, Iterable<K> keys) {
    try {
        return cache.getAll(keys);
    } catch (ExecutionException | UncheckedExecutionException | ExecutionError e) {
        throw Throwables.propagate(e.getCause());
    }
}
@Test(expected = ExecutionError.class)
public void testGetError() throws IOException {
    cache.get(new RareModificationCache.CacheKey<Object>() {
        @Override
        public Object computeValue() throws IOException {
            throw new Error();
        }
    });
}
public void incr(final K key) {
    try {
        counters.get(key).increment();
    } catch (final UncheckedExecutionException | ExecutionException | ExecutionError e) {
        LOGGER.error("Error incrementing counter.", e);
    }
}
@Override
protected final StringConcatenationClient cacheCall() {
    StringConcatenationClient _client = new StringConcatenationClient() {
        @Override
        protected void appendTo(StringConcatenationClient.TargetStringConcatenation _builder) {
            _builder.append("try {");
            _builder.newLine();
            _builder.append("\t");
            _builder.append("return ");
            String _cacheFieldName = ParametrizedMethodMemoizer.this.cacheFieldName();
            _builder.append(_cacheFieldName, "\t");
            _builder.append(".get(");
            StringConcatenationClient _parametersToCacheKey = ParametrizedMethodMemoizer.this.parametersToCacheKey();
            _builder.append(_parametersToCacheKey, "\t");
            _builder.append(");");
            _builder.newLineIfNotEmpty();
            _builder.append("} catch (Throwable e) {");
            _builder.newLine();
            _builder.append("\t");
            _builder.append("if (e instanceof ");
            _builder.append(ExecutionException.class, "\t");
            _builder.newLineIfNotEmpty();
            _builder.append("\t\t");
            _builder.append("|| e instanceof ");
            _builder.append(UncheckedExecutionException.class, "\t\t");
            _builder.newLineIfNotEmpty();
            _builder.append("\t\t");
            _builder.append("|| e instanceof ");
            _builder.append(ExecutionError.class, "\t\t");
            _builder.append(") {");
            _builder.newLineIfNotEmpty();
            _builder.append("\t\t");
            _builder.append("Throwable cause = e.getCause();");
            _builder.newLine();
            _builder.append("\t\t");
            _builder.append("throw ");
            _builder.append(Exceptions.class, "\t\t");
            _builder.append(".sneakyThrow(cause);");
            _builder.newLineIfNotEmpty();
            _builder.append("\t");
            _builder.append("} else {");
            _builder.newLine();
            _builder.append("\t\t");
            _builder.append("throw ");
            _builder.append(Exceptions.class, "\t\t");
            _builder.append(".sneakyThrow(e);");
            _builder.newLineIfNotEmpty();
            _builder.append("\t");
            _builder.append("}");
            _builder.newLine();
            _builder.append("}");
            _builder.newLine();
        }
    };
    return _client;
}
Example source code using com.google.common.util.concurrent.ExecutionList
protected JobLauncherExecutionDriver(JobSpec jobSpec, Logger log, DriverRunnable runnable) {
    super(runnable);
    _closer = Closer.create();
    _closer.register(runnable.getJobLauncher());
    _log = log;
    _jobSpec = jobSpec;
    _jobExec = runnable.getJobExec();
    _callbackDispatcher = _closer.register(runnable.getCallbackDispatcher());
    _jobState = runnable.getJobState();
    _executionList = new ExecutionList();
    _runnable = runnable;
}