If you want to learn how to configure SELinux access so that Apache can reach a mounted directory, and how httpd access to Linux files works, this article walks through the problem and its fix, with a concrete case analysis.
Contents:
- Configuring SELinux access so that Apache can access a mounted directory (httpd access to Linux files)
- Example source code for org.apache.commons.configuration.AbstractConfiguration
- Example source code for org.apache.commons.configuration.AbstractFileConfiguration
- Example source code for org.apache.commons.configuration.AbstractHierarchicalFileConfiguration
- Example source code for org.apache.commons.configuration.BaseConfiguration
Configuring SELinux access so that Apache can access a mounted directory (httpd access to Linux files)
I have a directory at /var/www/html/ict that is mounted from my home directory. The user permissions are fine, but through a web browser I still get a 403 error.
I suspect SELinux is not allowing files and directories that come from another location. Can you help me add the relevant permissions so this is fixed?
Error records from the audit log:
type=AVC msg=audit(1395610534.041:179195): avc: denied { search } for pid=18370 comm="httpd" name="upload" dev=dm-0 ino=2506938 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=dir
type=SYSCALL msg=audit(1395610534.041:179195): arch=c000003e syscall=4 success=no exit=-13 a0=7ffb5f863bc8 a1=7fff80a374c0 a2=7fff80a374c0 a3=0 items=0 ppid=3075 pid=18370 auid=0 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm="httpd" exe="/usr/sbin/httpd" subj=unconfined_u:system_r:httpd_t:s0 key=(null)
type=AVC msg=audit(1395610534.043:179196): avc: denied { getattr } for pid=18370 comm="httpd" path="/var/www/html/ict/farengine" dev=dm-0 ino=2506938 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=dir
type=SYSCALL msg=audit(1395610534.043:179196): arch=c000003e syscall=6 success=no exit=-13 a0=7ffb5f863cb0 a1=7fff80a374c0 a2=7fff80a374c0 a3=1 items=0 ppid=3075 pid=18370 auid=0 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm="httpd" exe="/usr/sbin/httpd" subj=unconfined_u:system_r:httpd_t:s0 key=(null)
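The two AVC records are the important part: the source context httpd_t (Apache) is denied search and getattr on a directory labeled user_home_t. As an illustration only (not part of the original post), the key=value fields of such a record can be pulled apart programmatically; the AvcParser class and its regex below are a hypothetical sketch, not an audit-toolchain API:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: extract key=value fields from an audit record so the
// denial is easier to read. Field names follow the audit log format above.
public class AvcParser {
    static Map<String, String> parse(String record) {
        Map<String, String> fields = new LinkedHashMap<>();
        Matcher m = Pattern.compile("(\\w+)=([^\\s]+)").matcher(record);
        while (m.find()) {
            fields.put(m.group(1), m.group(2));
        }
        return fields;
    }

    public static void main(String[] args) {
        String avc = "type=AVC msg=audit(1395610534.041:179195): avc: denied "
                + "{ search } for pid=18370 name=\"upload\" "
                + "scontext=unconfined_u:system_r:httpd_t:s0 "
                + "tcontext=unconfined_u:object_r:user_home_t:s0 tclass=dir";
        Map<String, String> f = parse(avc);
        // The source context is the httpd_t domain; the target is user_home_t,
        // which httpd may not search by default.
        System.out.println(f.get("scontext") + " -> " + f.get("tcontext"));
    }
}
```

Reading the pair of contexts this way points directly at the fix below: relabel the directories so their type is one httpd_t is allowed to access.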
Rather than just posting a link (and rather than ripping off that link's content wholesale), here is the rundown.
Install policycoreutils-python, which includes semanage. It lets you set policies that allow Apache to read, or read and write, areas outside the DocumentRoot.
yum install -y policycoreutils-python
The article also mentions a troubleshooting package, but my machine couldn't find it.
Create a policy for read-only areas outside the DocumentRoot that are part of the application:
semanage fcontext -a -t httpd_sys_content_t "/webapps(/.*)?"
Create a policy for the logging directory:
semanage fcontext -a -t httpd_log_t "/webapps/logs(/.*)?"
Create a policy for the cache directory:
semanage fcontext -a -t httpd_cache_t "/webapps/cache(/.*)?"
Create a policy for read/write areas outside the DocumentRoot:
semanage fcontext -a -t httpd_sys_rw_content_t "/webapps/app1/public_html/uploads(/.*)?"
Apply the policies with the restorecon command:
restorecon -Rv /webapps
Verify that the policies have been applied:
ls -lZ /webapps
That's it in a nutshell. The original article is a better read, though.
If you are on CentOS with SELinux enabled, try:
sudo restorecon -r /var/www/html
See more: https://www.centos.org/forums/viewtopic.php?t=6834#p31548
Example source code for org.apache.commons.configuration.AbstractConfiguration
public static AbstractConfiguration createDynamicConfig() { LOGGER.info("create dynamic config:"); ConcurrentCompositeConfiguration config = ConfigUtil.createLocalConfig(); DynamicWatchedConfiguration configFromConfigCenter = createConfigFromConfigCenter(config); if (configFromConfigCenter != null) { ConcurrentMapConfiguration injectConfig = new ConcurrentMapConfiguration(); config.addConfigurationAtFront(injectConfig,"extraInjectConfig"); duplicateServiceCombConfigToCse(configFromConfigCenter); config.addConfigurationAtFront(configFromConfigCenter,"configCenterConfig"); configFromConfigCenter.getSource().addUpdateListener(new ServiceCombPropertyUpdateListener(injectConfig)); } return config; }
@Test public void testCreateDynamicConfigNoConfigCenterSPI() { new Expectations(SPIServiceUtils.class) { { SPIServiceUtils.getTargetService(ConfigCenterConfigurationSource.class); result = null; } }; AbstractConfiguration dynamicConfig = ConfigUtil.createDynamicConfig(); MicroserviceConfigLoader loader = ConfigUtil.getMicroserviceConfigLoader(dynamicConfig); List<ConfigModel> list = loader.getConfigModels(); Assert.assertEquals(loader,ConfigUtil.getMicroserviceConfigLoader(dynamicConfig)); Assert.assertEquals(1,list.size()); Assert.assertNotEquals(DynamicWatchedConfiguration.class,((ConcurrentCompositeConfiguration) dynamicConfig).getConfiguration(0).getClass()); }
@BeforeClass public static void beforeCls() { ConfigUtil.installDynamicConfig(); AbstractConfiguration configuration = (AbstractConfiguration) DynamicPropertyFactory.getBackingConfigurationSource(); configuration.addProperty("cse.loadbalance.test.transactionControl.policy","org.apache.servicecomb.loadbalance.filter.SimpleTransactionControlFilter"); configuration.addProperty("cse.loadbalance.test.transactionControl.options.tag0","value0"); configuration.addProperty("cse.loadbalance.test.isolation.enabled","true"); configuration.addProperty("cse.loadbalance.serverListFilters","a"); configuration.addProperty("cse.loadbalance.serverListFilter.a.className","org.apache.servicecomb.loadbalance.MyServerListFilterExt"); }
@Before public void setUp() throws Exception { IsolationServerListFilter = new IsolationServerListFilter(); loadBalancerStats = new LoadBalancerStats("loadBalancer"); AbstractConfiguration configuration = (AbstractConfiguration) DynamicPropertyFactory.getBackingConfigurationSource(); configuration.clearProperty("cse.loadbalance.isolation.enabled"); configuration.addProperty("cse.loadbalance.isolation.enabled","true"); configuration.clearProperty("cse.loadbalance.isolation.enableRequestThreshold"); configuration.addProperty("cse.loadbalance.isolation.enableRequestThreshold","3"); }
@After public void tearDown() throws Exception { IsolationServerListFilter = null; loadBalancerStats = null; AbstractConfiguration configuration = (AbstractConfiguration) DynamicPropertyFactory.getBackingConfigurationSource(); configuration.clearProperty("cse.loadbalance.isolation.continuousFailureThreshold"); }
@Test public void testGetFilteredListOfServersOnContinuousFailureReachesThreshold() { ((AbstractConfiguration) DynamicPropertyFactory.getBackingConfigurationSource()) .addProperty("cse.loadbalance.isolation.continuousFailureThreshold","3"); Invocation invocation = Mockito.mock(Invocation.class); CseServer testServer = Mockito.mock(CseServer.class); Mockito.when(invocation.getMicroserviceName()).thenReturn("microserviceName"); Mockito.when(testServer.getCountinuousFailureCount()).thenReturn(3); Mockito.when(testServer.getLastVisitTime()).thenReturn(System.currentTimeMillis()); for (int i = 0; i < 3; ++i) { loadBalancerStats.incrementNumRequests(testServer); } List<Server> serverList = new ArrayList<>(); serverList.add(testServer); IsolationServerListFilter.setLoadBalancerStats(loadBalancerStats); IsolationServerListFilter.setInvocation(invocation); List<Server> returnedServerList = IsolationServerListFilter.getFilteredListOfServers(serverList); Assert.assertEquals(0,returnedServerList.size()); }
@Test public void testGetFilteredListOfServersOnContinuousFailureIsBelowThreshold() { ((AbstractConfiguration) DynamicPropertyFactory.getBackingConfigurationSource()) .addProperty("cse.loadbalance.isolation.continuousFailureThreshold","3"); Invocation invocation = Mockito.mock(Invocation.class); CseServer testServer = Mockito.mock(CseServer.class); Mockito.when(invocation.getMicroserviceName()).thenReturn("microserviceName"); Mockito.when(testServer.getCountinuousFailureCount()).thenReturn(2); Mockito.when(testServer.getLastVisitTime()).thenReturn(System.currentTimeMillis()); for (int i = 0; i < 3; ++i) { loadBalancerStats.incrementNumRequests(testServer); } List<Server> serverList = new ArrayList<>(); serverList.add(testServer); IsolationServerListFilter.setLoadBalancerStats(loadBalancerStats); IsolationServerListFilter.setInvocation(invocation); List<Server> returnedServerList = IsolationServerListFilter.getFilteredListOfServers(serverList); Assert.assertEquals(1,returnedServerList.size()); }
@BeforeClass public static void beforeCls() { ConfigUtil.installDynamicConfig(); AbstractConfiguration configuration = (AbstractConfiguration) DynamicPropertyFactory.getBackingConfigurationSource(); configuration.addProperty(REQUEST_TIMEOUT_KEY,2000); }
@Override public void init(AbstractConfiguration config,ApplicationListenerFactory factory) { init(config); logger.trace("Initializing Kafka consumer ..."); // consumer config Properties props = new Properties(); props.put("bootstrap.servers",config.getString("bootstrap.servers")); props.put("group.id",config.getString("group.id")); props.put("enable.auto.commit","true"); props.put("key.serializer",StringSerializer.class.getName()); props.put("value.serializer",InternalMessageSerializer.class.getName()); // consumer this.consumer = new KafkaConsumer<>(props); // consumer worker this.worker = new KafkaApplicationWorker(this.consumer,APPLICATION_TOPIC,factory.newListener()); this.executor.submit(this.worker); }
@Override public void init(AbstractConfiguration config,String brokerId,BrokerListenerFactory factory) { init(config); BROKER_TOPIC = BROKER_TOPIC_PREFIX + "." + brokerId; logger.trace("Initializing Kafka consumer ..."); // consumer config Properties props = new Properties(); props.put("bootstrap.servers",config.getString("bootstrap.servers")); props.put("group.id",UUIDs.shortUuid()); props.put("enable.auto.commit","true"); props.put("key.serializer",StringSerializer.class.getName()); props.put("value.serializer",InternalMessageSerializer.class.getName()); // consumer this.consumer = new KafkaConsumer<>(props); // consumer worker this.worker = new KafkaBrokerWorker(this.consumer,BROKER_TOPIC,factory.newListener()); this.executor.submit(this.worker); }
protected void init(AbstractConfiguration config) { BROKER_TOPIC_PREFIX = config.getString("communicator.broker.topic"); APPLICATION_TOPIC = config.getString("communicator.application.topic"); logger.trace("Initializing Kafka producer ..."); // producer config Properties props = new Properties(); props.put("bootstrap.servers",config.getString("bootstrap.servers")); props.put("acks",config.getString("acks")); props.put("key.serializer",StringSerializer.class.getName()); props.put("value.serializer",InternalMessageSerializer.class.getName()); // producer this.producer = new KafkaProducer<>(props); // consumer executor this.executor = Executors.newSingleThreadExecutor(); }
@Override public void init(AbstractConfiguration config) { if (!config.getString("redis.type").equals("single")) { throw new IllegalStateException("RedisSyncSingleStorage class can only be used with single redis setup,but redis.type value is " + config.getString("redis.type")); } List<String> address = parseRedisAddress(config.getString("redis.address"),6379); int databaseNumber = config.getInt("redis.database",0); String password = StringUtils.isNotEmpty(config.getString("redis.password")) ? config.getString("redis.password") + "@" : ""; // lettuce RedisURI lettuceURI = RedisURI.create("redis://" + password + address.get(0) + "/" + databaseNumber); this.lettuce = RedisClient.create(lettuceURI); this.lettuceConn = this.lettuce.connect(); // redisson Config redissonConfig = new Config(); redissonConfig.useSingleServer() .setAddress(address.get(0)) .setDatabase(databaseNumber) .setPassword(StringUtils.isNotEmpty(password) ? password : null); this.redisson = Redisson.create(redissonConfig); // params initParams(config); }
@Override public void init(AbstractConfiguration config,String serverId) { try { ConnectionFactory cf = new ConnectionFactory(); cf.setUsername(config.getString("rabbitmq.userName",ConnectionFactory.DEFAULT_USER)); cf.setPassword(config.getString("rabbitmq.password",ConnectionFactory.DEFAULT_PASS)); cf.setVirtualHost(config.getString("rabbitmq.virtualHost",ConnectionFactory.DEFAULT_VHOST)); cf.setAutomaticRecoveryEnabled(true); cf.setExceptionHandler(new RabbitMQExceptionHandler()); this.conn = cf.newConnection(Address.parseAddresses(config.getString("rabbitmq.addresses"))); this.channel = conn.createChannel(); logger.trace("Initializing RabbitMQ broker resources ..."); BROKER_TOPIC_PREFIX = config.getString("communicator.broker.topic"); logger.trace("Initializing RabbitMQ application resources ..."); APPLICATION_TOPIC = config.getString("communicator.application.topic"); this.channel.exchangeDeclare(APPLICATION_TOPIC,"topic",true); } catch (IOException | TimeoutException e) { logger.error("Failed to connect to RabbitMQ servers",e); throw new IllegalStateException("Init RabbitMQ communicator Failed"); } }
/** * Attempts to acquire the Vault URL from Archaius. * * @return Vault URL */ @Nullable @Override public String resolve() { final AbstractConfiguration configuration = ConfigurationManager.getConfigInstance(); final String envUrl = configuration.getString(CERBERUS_ADDR_ENV_PROPERTY); final String sysUrl = configuration.getString(CERBERUS_ADDR_SYS_PROPERTY); if (StringUtils.isNotBlank(envUrl) && HttpUrl.parse(envUrl) != null) { return envUrl; } else if (StringUtils.isNotBlank(sysUrl) && HttpUrl.parse(sysUrl) != null) { return sysUrl; } logger.warn("Unable to resolve the Cerberus URL."); return null; }
@Override public Config createConfig(String name) { if (CONFIG != null) { return CONFIG; } synchronized (ArchaiusBaseFactory.class) { if (CONFIG == null) { AbstractConfiguration configuration = getConfiguration(); ConfigurationManager.install(configuration); CONFIG = new ArchaiusWrapper(configuration); ConfigFactory.setContext(CONFIG); configuration.addConfigurationListener(event -> { if (!event.isBeforeUpdate()) { CONFIG.fire(event.getPropertyName()); } }); } } return CONFIG; }
@Override public void init(AbstractConfiguration config) { if (!config.getString("redis.type").equals("single")) { throw new IllegalStateException("RedisSyncSingleStorageImpl class can only be used with single redis setup,but redis.type value is " + config.getString("redis.type")); } List<String> address = parseRedisAddress(config.getString("redis.address"),6379); int databaseNumber = config.getInt("redis.database",0); String password = StringUtils.isNotEmpty(config.getString("redis.password")) ? config.getString("redis.password") + "@" : ""; // lettuce RedisURI lettuceURI = RedisURI.create("redis://" + password + address.get(0) + "/" + databaseNumber); this.lettuce = RedisClient.create(lettuceURI); this.lettuceConn = this.lettuce.connect(); // params initParams(config); }
@Override public void init(AbstractConfiguration config) { if (!config.getString("redis.type").equals("sentinel")) { throw new IllegalStateException("RedisSyncSingleStorageImpl class can only be used with sentinel redis setup,but redis.type value is " + config.getString("redis.type")); } List<String> address = parseRedisAddress(config.getString("redis.address"),26379); int databaseNumber = config.getInt("redis.database",0); String password = StringUtils.isNotEmpty(config.getString("redis.password")) ? config.getString("redis.password") + "@" : ""; String masterId = config.getString("redis.master"); // lettuce RedisURI lettuceURI = RedisURI.create("redis-sentinel://" + password + String.join(",",address) + "/" + databaseNumber + "#" + masterId); this.lettuceSentinel = RedisClient.create(lettuceURI); this.lettuceSentinelConn = MasterSlave.connect(this.lettuceSentinel,new Utf8StringCodec(),lettuceURI); this.lettuceSentinelConn.setReadFrom(ReadFrom.valueOf(config.getString("redis.read"))); // params initParams(config); }
@Override public void init(AbstractConfiguration config) { if (!config.getString("redis.type").equals("master_slave")) { throw new IllegalStateException("RedisSyncSingleStorageImpl class can only be used with master slave redis setup,but redis.type value is " + config.getString("redis.type")); } List<String> address = parseRedisAddress(config.getString("redis.address"),6379); int databaseNumber = config.getInt("redis.database",0); String password = StringUtils.isNotEmpty(config.getString("redis.password")) ? config.getString("redis.password") + "@" : ""; // lettuce RedisURI lettuceURI = RedisURI.create("redis://" + password + address.get(0) + "/" + databaseNumber); this.lettuceMasterSlave = RedisClient.create(lettuceURI); this.lettuceMasterSlaveConn = MasterSlave.connect(this.lettuceMasterSlave,new Utf8StringCodec(),lettuceURI); this.lettuceMasterSlaveConn.setReadFrom(ReadFrom.valueOf(config.getString("redis.read"))); // params initParams(config); }
/** Create a new instance. */ @Inject Plugin(Registry registry) throws IOException { AbstractConfiguration config = ConfigurationManager.getConfigInstance(); final boolean enabled = config.getBoolean(ENABLED_PROP,true); if (enabled) { ConfigurationManager.loadPropertiesFromResources(CONFIG_FILE); if (config.getBoolean("spectator.gc.loggingEnabled")) { GC_LOGGER.start(null); LOGGER.info("gc logging started"); } else { LOGGER.info("gc logging is not enabled"); } Jmx.registerStandardMXBeans(registry); } else { LOGGER.debug("plugin not enabled, set " + ENABLED_PROP + "=true to enable"); } }
@Override protected void doGet(HttpServletRequest req,HttpServletResponse resp) throws ServletException,IOException { // get list of properties TreeSet<String> properties = new TreeSet<String>(); AbstractConfiguration config = ConfigurationManager.getConfigInstance(); Iterator<String> keys = config.getKeys(); while (keys.hasNext()) { String key = keys.next(); Object value = config.getProperty(key); if ("aws.accessId".equals(key) || "aws.secretKey".equals(key) || "experiments-service.secret".equals(key) || "java.class.path".equals(key) || key.contains("framework.securityDefinition") || key.contains("password") || key.contains("secret")) { value = "*****"; } properties.add(key + "=" + value.toString()); } // write them out in sorted order for (String line : properties) { resp.getWriter().append(line).println(); } }
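The servlet above redacts sensitive keys before printing the configuration. A minimal standalone sketch of that masking rule (the ConfigMasker class is hypothetical; the key list and the "password"/"secret" substring checks come from the snippet above):

```java
import java.util.Arrays;
import java.util.List;

// Sketch of the secret-masking rule used by the config-dump servlet above:
// a few exact keys, plus any key containing "password" or "secret", are
// replaced with "*****" before being written out.
public class ConfigMasker {
    private static final List<String> EXACT = Arrays.asList(
            "aws.accessId", "aws.secretKey", "java.class.path");

    static String mask(String key, String value) {
        if (EXACT.contains(key)
                || key.contains("password")
                || key.contains("secret")) {
            return "*****";
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println("db.password=" + mask("db.password", "hunter2"));
        System.out.println("app.name=" + mask("app.name", "demo"));
    }
}
```

Masking at dump time, rather than removing the keys, keeps the full key list visible while hiding only the values.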
@Override public void onApplicationEvent(ContextRefreshedEvent event) { logger.debug("Received ContextRefreshedEvent {}",event); if (event.getSource().equals(getBootstrapApplicationContext())) { //the root context is fully started appMetadata = bootstrapApplicationContext.getBean(AppMetadata.class); configuration = bootstrapApplicationContext.getBean(AbstractConfiguration.class); configurationProvider = bootstrapApplicationContext.getBean(ConfigurationProvider.class); logger.debug("Root context started"); initClientApplication(); return; } if (event.getSource() instanceof ApplicationContext && ((ApplicationContext) event.getSource()).getId().equals(appMetadata.getName())) { //the child context is fully started this.applicationContext = (AbstractApplicationContext) event.getSource(); logger.debug("Child context started"); } state.compareAndSet(State.STARTING,State.RUNNING); }
@Bean @SuppressWarnings("resource") public AbstractConfiguration applicationConfiguration() throws ClassNotFoundException { AppMetadata appMetadata = appMetadata(); ServerInstanceContext serverInstanceContext = serverInstanceContext(); if(appEnvironment == null && serverInstanceContext != null){ appEnvironment = serverInstanceContext.getEnvironment(); } ConfigurationBuilder configurationBuilder = new ConfigurationBuilder(appMetadata.getName(),appEnvironment,addSystemConfigs,reflections()); configurationBuilder.withConfigurationProvider(configurationProvider()); configurationBuilder.withServerInstanceContext(serverInstanceContext()); configurationBuilder.withApplicationProperties(appMetadata.getPropertiesResourceLocation()); configurationBuilder.withScanModuleConfigurations(scanModuleConfigurations); configurationBuilder.withAppVersion(appMetadata.getDeclaringClass().getPackage().getImplementationVersion()); AbstractConfiguration configuration = configurationBuilder.build(); if(serverInstanceContext != null){ serverInstanceContext.setAppName(appMetadata.getName()); serverInstanceContext.setVersion(configuration.getString(BootstrapConfigKeys.APP_VERSION_KEY.getPropertyName())); } return configuration; }
@Override public void onApplicationEvent(EnvironmentChangeEvent event) { AbstractConfiguration manager = ConfigurationManager.getConfigInstance(); for (String key : event.getKeys()) { for (ConfigurationListener listener : manager .getConfigurationListeners()) { Object source = event.getSource(); // Todo: Handle add vs set vs delete? int type = AbstractConfiguration.EVENT_SET_PROPERTY; String value = this.env.getProperty(key); boolean beforeUpdate = false; listener.configurationChanged(new ConfigurationEvent(source,type,key,value,beforeUpdate)); } } }
private static void addArchaiusConfiguration(ConcurrentCompositeConfiguration config) { if (ConfigurationManager.isConfigurationInstalled()) { AbstractConfiguration installedConfiguration = ConfigurationManager .getConfigInstance(); if (installedConfiguration instanceof ConcurrentCompositeConfiguration) { ConcurrentCompositeConfiguration configInstance = (ConcurrentCompositeConfiguration) installedConfiguration; configInstance.addConfiguration(config); } else { installedConfiguration.append(config); if (!(installedConfiguration instanceof AggregatedConfiguration)) { log.warn( "Appending a configuration to an existing non-aggregated installed configuration will have no effect"); } } } else { ConfigurationManager.install(config); } }
@Test public void testSunnyDayNoClientAuth() throws Exception{ AbstractConfiguration cm = ConfigurationManager.getConfigInstance(); String name = "GetPostSecureTest" + ".testSunnyDayNoClientAuth"; String configPrefix = name + "." + "ribbon"; cm.setProperty(configPrefix + "." + CommonClientConfigKey.IsSecure,"true"); cm.setProperty(configPrefix + "." + CommonClientConfigKey.SecurePort,Integer.toString(PORT2)); cm.setProperty(configPrefix + "." + CommonClientConfigKey.IsHostnameValidationRequired,"false"); cm.setProperty(configPrefix + "." + CommonClientConfigKey.TrustStore,FILE_TS2.getAbsolutePath()); cm.setProperty(configPrefix + "." + CommonClientConfigKey.TrustStorePassword,PASSWORD); RestClient rc = (RestClient) ClientFactory.getNamedClient(name); testServer2.accept(); URI getUri = new URI(SERVICE_URI2 + "test/"); HttpRequest request = HttpRequest.newBuilder().uri(getUri).queryParams("name","test").build(); HttpResponse response = rc.execute(request); assertEquals(200,response.getStatus()); }
@Before public void setUp() { AbstractConfiguration.setDefaultListDelimiter(','); clearTestSystemProperties(); this.configurationHelper = new ConfigurationHelper(); this.test1Properties = new HashMap<String,String>() {{ this.put("a.b.c","efgh"); this.put("a.b.d","1234"); }}; this.test3Properties = new HashMap<String,String>() {{ this.put("e.f.g","jklm"); this.put("e.f.h","90123"); // The value in the file is "foo,bar" but AbstractConfiguration.getString(key) only returns // the first item in a collection. this.put("i.j.k","foo"); }}; }
private void invokeListeners() { if (m_configurationListeners != null) { try { ConfigurationEvent event = new ConfigurationEvent(this,AbstractConfiguration.EVENT_SET_PROPERTY,null,this,false); for (ConfigurationListener listener:m_configurationListeners) { listener.configurationChanged(event); } } catch (Exception e) { throw new RuntimeException(e); } } }
public static AbstractConfiguration convertEnvVariable(AbstractConfiguration source) { Iterator<String> keys = source.getKeys(); while (keys.hasNext()) { String key = keys.next(); String[] separatedKey = key.split(CONFIG_KEY_SPLITER); if (separatedKey.length == 1) { continue; } String newKey = String.join(".",separatedKey); source.addProperty(newKey,source.getProperty(key)); } return source; }
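convertEnvVariable duplicates each underscore-separated environment key under its dotted form while keeping the original key. A self-contained sketch of the same idea, using a plain Map in place of AbstractConfiguration (the class and method names here are illustrative, not ServiceComb API):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the env-key conversion above: a key such as
// cse_service_registry_address gains a dotted twin
// cse.service.registry.address; the original key is preserved.
public class EnvKeyConverter {
    static Map<String, Object> convert(Map<String, Object> source) {
        Map<String, Object> result = new HashMap<>(source);
        for (Map.Entry<String, Object> e : source.entrySet()) {
            String[] parts = e.getKey().split("_");
            if (parts.length > 1) {
                result.put(String.join(".", parts), e.getValue());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> src = new HashMap<>();
        src.put("cse_service_registry_address", "testing");
        Map<String, Object> out = convert(src);
        System.out.println(out.get("cse.service.registry.address")); // testing
    }
}
```

This matches the behavior checked by the testConvertEnvVariable test further down: both spellings of the key resolve to the same value.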
private static void duplicateServiceCombConfigToCse(AbstractConfiguration source) { Iterator<String> keys = source.getKeys(); while (keys.hasNext()) { String key = keys.next(); if (!key.startsWith(CONFIG_SERVICECOMB_PREFIX)) { continue; } String cseKey = CONFIG_CSE_PREFIX + key.substring(key.indexOf(".") + 1); source.addProperty(cseKey,source.getProperty(key)); } }
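duplicateServiceCombConfigToCse copies every servicecomb.* key to a matching cse.* key so both spellings resolve to the same value. A standalone sketch of that prefix mapping, again with a plain Map standing in for AbstractConfiguration (class name is illustrative):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of the prefix duplication above: for each servicecomb.* key, add a
// cse.* key with the same suffix and value, leaving the original in place.
public class PrefixDuplicator {
    static void duplicate(Map<String, Object> config) {
        Map<String, Object> copies = new HashMap<>();
        for (Map.Entry<String, Object> e : config.entrySet()) {
            String key = e.getKey();
            if (key.startsWith("servicecomb.")) {
                copies.put("cse." + key.substring(key.indexOf('.') + 1),
                        e.getValue());
            }
        }
        config.putAll(copies); // add outside the loop to avoid concurrent modification
    }

    public static void main(String[] args) {
        Map<String, Object> config = new HashMap<>();
        config.put("servicecomb.list", "a,b");
        duplicate(config);
        System.out.println(config.get("cse.list")); // a,b
    }
}
```

The duplicateServiceCombConfigToCseListValue test below exercises exactly this mapping, with a list value instead of a string.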
private static void duplicateServiceCombConfigToCse(ConcurrentCompositeConfiguration compositeConfiguration,AbstractConfiguration source,String sourceName) { duplicateServiceCombConfigToCse(source); compositeConfiguration.addConfiguration(source,sourceName); }
public static void installDynamicConfig() { if (ConfigurationManager.isConfigurationInstalled()) { LOGGER.warn("Configuration installed by others,will ignore this configuration."); return; } AbstractConfiguration dynamicConfig = ConfigUtil.createDynamicConfig(); ConfigurationManager.install(dynamicConfig); }
@Override protected Properties mergeProperties() throws IOException { Properties properties = super.mergeProperties(); AbstractConfiguration config = ConfigurationManager.getConfigInstance(); Iterator<String> iter = config.getKeys(); while (iter.hasNext()) { String key = iter.next(); Object value = config.getProperty(key); properties.put(key,value); } return properties; }
@Test public void testCreateDynamicConfigHasConfigCenter( @Mocked ConfigCenterConfigurationSource configCenterConfigurationSource) { AbstractConfiguration dynamicConfig = ConfigUtil.createDynamicConfig(); Assert.assertEquals(DynamicWatchedConfiguration.class,((ConcurrentCompositeConfiguration) dynamicConfig).getConfiguration(0).getClass()); }
@Test public void duplicateServiceCombConfigToCseListValue() throws Exception { List<String> list = Arrays.asList("a","b"); AbstractConfiguration config = new DynamicConfiguration(); config.addProperty("servicecomb.list",list); Deencapsulation.invoke(ConfigUtil.class,"duplicateServiceCombConfigToCse",config); Object result = config.getProperty("cse.list"); assertThat(result,instanceOf(List.class)); assertThat(result,equalTo(list)); }
@Test public void testConvertEnvVariable() { String someProperty = "cse_service_registry_address"; AbstractConfiguration config = new DynamicConfiguration(); config.addProperty(someProperty,"testing"); AbstractConfiguration result = ConfigUtil.convertEnvVariable(config); assertThat(result.getString("cse.service.registry.address"),equalTo("testing")); assertThat(result.getString("cse_service_registry_address"),equalTo("testing")); }
@Test public void testCreateMicroserviceInstanceFromFile() { AbstractConfiguration config = ConfigUtil.createDynamicConfig(); ConcurrentCompositeConfiguration configuration = new ConcurrentCompositeConfiguration(); configuration.addConfiguration(config); ConfigurationManager.install(configuration); MicroserviceInstance instance = MicroserviceInstance.createFromDefinition(config); Assert.assertEquals(instance.getDataCenterInfo().getName(),"myDC"); Assert.assertEquals(instance.getDataCenterInfo().getRegion(),"my-Region"); Assert.assertEquals(instance.getDataCenterInfo().getAvailableZone(),"my-Zone"); }
@BeforeClass public static void initSetup() throws Exception { AbstractConfiguration dynamicConfig = ConfigUtil.createDynamicConfig(); ConcurrentCompositeConfiguration configuration = new ConcurrentCompositeConfiguration(); configuration.addConfiguration(dynamicConfig); configuration.addConfiguration(inMemoryConfig); ConfigurationManager.install(configuration); }
@BeforeClass public static void beforeCls() { AbstractConfiguration configuration = new BaseConfiguration(); configuration.addProperty("cse.loadbalance.test.flowsplitFilter.policy","org.apache.servicecomb.loadbalance.filter.SimpleFlowsplitFilter"); configuration.addProperty("cse.loadbalance.test.flowsplitFilter.options.tag0","value0"); }
@Override public void init(AbstractConfiguration config) { if (!config.getString("redis.type").equals("master_slave")) { throw new IllegalStateException("RedisSyncSingleStorage class can only be used with master slave redis setup,but redis.type value is " + config.getString("redis.type")); } List<String> address = parseRedisAddress(config.getString("redis.address"),6379); int databaseNumber = config.getInt("redis.database",0); String password = StringUtils.isNotEmpty(config.getString("redis.password")) ? config.getString("redis.password") + "@" : ""; // lettuce RedisURI lettuceURI = RedisURI.create("redis://" + password + address.get(0) + "/" + databaseNumber); this.lettuceMasterSlave = RedisClient.create(lettuceURI); this.lettuceMasterSlaveConn = MasterSlave.connect(this.lettuceMasterSlave,new Utf8StringCodec(),lettuceURI); this.lettuceMasterSlaveConn.setReadFrom(ReadFrom.valueOf(config.getString("redis.read"))); // redisson String masterNode = address.get(0); String[] slaveNodes = address.subList(1,address.size()).toArray(new String[address.size() - 1]); Config redissonConfig = new Config(); redissonConfig.useMasterSlaveServers() .setMasterAddress(masterNode) .setLoadBalancer(new RoundRobinLoadBalancer()) .addSlaveAddress(slaveNodes) .setReadMode(ReadMode.MASTER) .setDatabase(databaseNumber) .setPassword(StringUtils.isNotEmpty(password) ? password : null); this.redisson = Redisson.create(redissonConfig); // params initParams(config); }
@Override public void init(AbstractConfiguration config) { if (!config.getString("redis.type").equals("cluster")) { throw new IllegalStateException("RedisSyncSingleStorage class can only be used with cluster redis setup,but redis.type value is " + config.getString("redis.type")); } List<String> address = parseRedisAddress(config.getString("redis.address"),6379); int databaseNumber = config.getInt("redis.database",0); String password = StringUtils.isNotEmpty(config.getString("redis.password")) ? config.getString("redis.password") + "@" : ""; // lettuce RedisURI lettuceURI = RedisURI.create("redis://" + password + address.get(0) + "/" + databaseNumber); this.lettuceCluster = RedisClusterClient.create(lettuceURI); this.lettuceCluster.setOptions(new ClusterClientOptions.Builder() .refreshClusterView(true) .refreshPeriod(1,TimeUnit.MINUTES) .build()); this.lettuceClusterConn = this.lettuceCluster.connect(); this.lettuceClusterConn.setReadFrom(ReadFrom.valueOf(config.getString("redis.read"))); // redisson Config redissonConfig = new Config(); redissonConfig.useClusterServers() .setScanInterval(60000) .addNodeAddress(address.toArray(new String[address.size()])) .setReadMode(ReadMode.MASTER) .setPassword(StringUtils.isNotEmpty(password) ? password : null); this.redisson = Redisson.create(redissonConfig); // params initParams(config); }
Example source code for org.apache.commons.configuration.AbstractFileConfiguration
@Override public void configurationChanged(ConfigurationEvent event) { if (event.isBeforeUpdate()) { return; } logger.log(OpLevel.DEBUG,"configurationChanged: type={0},{1}:{2}",event.getType(),event.getPropertyName(),event.getPropertyValue()); switch (event.getType()) { case AbstractConfiguration.EVENT_ADD_PROPERTY: repListener.repositoryChanged(new TokenRepositoryEvent(event.getSource(),TokenRepository.EVENT_ADD_KEY,event.getPropertyValue(),null)); break; case AbstractConfiguration.EVENT_SET_PROPERTY: repListener.repositoryChanged(new TokenRepositoryEvent(event.getSource(),TokenRepository.EVENT_SET_KEY,null)); break; case AbstractConfiguration.EVENT_CLEAR_PROPERTY: repListener.repositoryChanged(new TokenRepositoryEvent(event.getSource(),TokenRepository.EVENT_CLEAR_KEY,null)); break; case AbstractConfiguration.EVENT_CLEAR: repListener.repositoryChanged(new TokenRepositoryEvent(event.getSource(),TokenRepository.EVENT_CLEAR,null)); break; case AbstractFileConfiguration.EVENT_RELOAD: repListener.repositoryChanged(new TokenRepositoryEvent(event.getSource(),TokenRepository.EVENT_RELOAD,null)); break; } }
@Test(dataProvider = "overrides") public void testFindConfigFile(String override,String expected) throws Exception { SshdSettingsBuilder testBuilder = new SshdSettingsBuilder(); Configuration config = testBuilder.findPropertiesConfiguration(override); AbstractFileConfiguration fileConfiguration = (AbstractFileConfiguration) config; // we need to create expected from a new file. // because it's a complete filename. File expectedFile = new File(expected); String expectedPath = "file://" + expectedFile.getAbsolutePath(); Assert.assertEquals(fileConfiguration.getFileName(),expectedPath); }
public void configurationChanged(ConfigurationEvent event) { if (!event.isBeforeUpdate() && event.getType() == AbstractFileConfiguration.EVENT_RELOAD) { if (m_applicationContext instanceof AbstractRefreshableApplicationContext) { ((AbstractRefreshableApplicationContext)m_applicationContext).refresh(); } } }
Example source code for org.apache.commons.configuration.AbstractHierarchicalFileConfiguration
@Override public void saveTo(OutputStream output) throws Exception { if (mConfiguration instanceof AbstractHierarchicalFileConfiguration) { ((AbstractHierarchicalFileConfiguration)mConfiguration).save(output); } else { throw new BadConfigException("Configuration not AbstractHierarchicalFileConfiguration!"); } }
Example source code for org.apache.commons.configuration.BaseConfiguration
@Override public TinkerGraph deserialize(final JsonParser jsonParser,final DeserializationContext deserializationContext) throws IOException,JsonProcessingException { final Configuration conf = new BaseConfiguration(); conf.setProperty("gremlin.tinkergraph.defaultVertexPropertyCardinality","list"); final TinkerGraph graph = TinkerGraph.open(conf); while (jsonParser.nextToken() != JsonToken.END_OBJECT) { if (jsonParser.getCurrentName().equals("vertices")) { while (jsonParser.nextToken() != JsonToken.END_ARRAY) { if (jsonParser.currentToken() == JsonToken.START_OBJECT) { final DetachedVertex v = (DetachedVertex) deserializationContext.readValue(jsonParser,Vertex.class); v.attach(Attachable.Method.getOrCreate(graph)); } } } else if (jsonParser.getCurrentName().equals("edges")) { while (jsonParser.nextToken() != JsonToken.END_ARRAY) { if (jsonParser.currentToken() == JsonToken.START_OBJECT) { final DetachedEdge e = (DetachedEdge) deserializationContext.readValue(jsonParser,Edge.class); e.attach(Attachable.Method.getOrCreate(graph)); } } } } return graph; }
@Test
public void shouldPersistToGraphML() {
    final String graphLocation = TestHelper.makeTestDataDirectory(TinkerGraphTest.class) + "shouldPersistToGraphML.xml";
    final File f = new File(graphLocation);
    if (f.exists() && f.isFile()) f.delete();
    final Configuration conf = new BaseConfiguration();
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_FORMAT, "graphml");
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_LOCATION, graphLocation);
    final TinkerGraph graph = TinkerGraph.open(conf);
    TinkerFactory.generateModern(graph);
    graph.close();
    final TinkerGraph reloadedGraph = TinkerGraph.open(conf);
    IoTest.assertModernGraph(reloadedGraph, true, true);
    reloadedGraph.close();
}
@Test
public void shouldPersistToGraphSON() {
    final String graphLocation = TestHelper.makeTestDataDirectory(TinkerGraphTest.class) + "shouldPersistToGraphSON.json";
    final File f = new File(graphLocation);
    if (f.exists() && f.isFile()) f.delete();
    final Configuration conf = new BaseConfiguration();
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_FORMAT, "graphson");
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_LOCATION, graphLocation);
    final TinkerGraph graph = TinkerGraph.open(conf);
    TinkerFactory.generateModern(graph);
    graph.close();
    final TinkerGraph reloadedGraph = TinkerGraph.open(conf);
    IoTest.assertModernGraph(reloadedGraph, true, false);
    reloadedGraph.close();
}
@Test
public void shouldPersistToGryo() {
    final String graphLocation = TestHelper.makeTestDataDirectory(TinkerGraphTest.class) + "shouldPersistToGryo.kryo";
    final File f = new File(graphLocation);
    if (f.exists() && f.isFile()) f.delete();
    final Configuration conf = new BaseConfiguration();
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_FORMAT, "gryo");
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_LOCATION, graphLocation);
    final TinkerGraph graph = TinkerGraph.open(conf);
    TinkerFactory.generateModern(graph);
    graph.close();
    final TinkerGraph reloadedGraph = TinkerGraph.open(conf);
    IoTest.assertModernGraph(reloadedGraph, true, false);
    reloadedGraph.close();
}
@Test
public void shouldPersistToGryoAndHandleMultiProperties() {
    final String graphLocation = TestHelper.makeTestDataDirectory(TinkerGraphTest.class) + "shouldPersistToGryoMulti.kryo";
    final File f = new File(graphLocation);
    if (f.exists() && f.isFile()) f.delete();
    final Configuration conf = new BaseConfiguration();
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_FORMAT, "gryo");
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_LOCATION, graphLocation);
    final TinkerGraph graph = TinkerGraph.open(conf);
    TinkerFactory.generateTheCrew(graph);
    graph.close();
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_DEFAULT_VERTEX_PROPERTY_CARDINALITY, VertexProperty.Cardinality.list.toString());
    final TinkerGraph reloadedGraph = TinkerGraph.open(conf);
    IoTest.assertCrewGraph(reloadedGraph, false);
    reloadedGraph.close();
}
@Test
public void shouldPersistWithRelativePath() {
    final String graphLocation = TestHelper.convertToRelative(TinkerGraphTest.class,
            new File(TestHelper.makeTestDataDirectory(TinkerGraphTest.class))) + "shouldPersistToGryoRelative.kryo";
    final File f = new File(graphLocation);
    if (f.exists() && f.isFile()) f.delete();
    final Configuration conf = new BaseConfiguration();
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_FORMAT, "gryo");
    conf.setProperty(TinkerGraph.GREMLIN_TINKERGRAPH_GRAPH_LOCATION, graphLocation);
    final TinkerGraph graph = TinkerGraph.open(conf);
    TinkerFactory.generateModern(graph);
    graph.close();
    final TinkerGraph reloadedGraph = TinkerGraph.open(conf);
    IoTest.assertModernGraph(reloadedGraph, true, false);
    reloadedGraph.close();
}
public DataStore(Configuration conf) throws QonduitException {
    try {
        final BaseConfiguration apacheConf = new BaseConfiguration();
        Configuration.Accumulo accumuloConf = conf.getAccumulo();
        apacheConf.setProperty("instance.name", accumuloConf.getInstanceName());
        apacheConf.setProperty("instance.zookeeper.host", accumuloConf.getZookeepers());
        final ClientConfiguration aconf = new ClientConfiguration(Collections.singletonList(apacheConf));
        final Instance instance = new ZooKeeperInstance(aconf);
        connector = instance.getConnector(accumuloConf.getUsername(), new PasswordToken(accumuloConf.getPassword()));
    } catch (Exception e) {
        throw new QonduitException(HttpResponseStatus.INTERNAL_SERVER_ERROR.code(), "Error creating DataStoreImpl",
                e.getMessage(), e);
    }
}
@Override
public Configuration configuration() {
    if (this.origConfig != null) {
        return this.origConfig;
    } else {
        Configuration ans = new BaseConfiguration();
        ans.setProperty(DB_PATH_KEY, dbPath.toString());
        ans.setProperty(ALLOW_FULL_GRAPH_SCANS_KEY, allowFullGraphScans);
        ans.setProperty(DEFAULT_ISOLATION_LEVEL_KEY, defaultIsolationLevel.toString());
        ans.setProperty(TX_LOG_THRESHOLD_KEY, getTxLogThreshold());
        ans.setProperty(REORG_FACTOR_KEY, getReorgFactor());
        ans.setProperty(CREATE_DIR_IF_MISSING_KEY, createDirIfMissing);
        ans.setProperty(VERTEX_INDICES_KEY, String.join(",", getIndexedKeys(Vertex.class)));
        ans.setProperty(EDGE_INDICES_KEY, String.join(",", getIndexedKeys(Edge.class)));
        return ans;
    }
}
public static void main(String[] args) throws Exception {
    try (ConfigurableApplicationContext ctx = new SpringApplicationBuilder(SpringBootstrap.class)
            .bannerMode(Mode.OFF).web(false).run(args)) {
        Configuration conf = ctx.getBean(Configuration.class);
        final BaseConfiguration apacheConf = new BaseConfiguration();
        Configuration.Accumulo accumuloConf = conf.getAccumulo();
        apacheConf.setProperty("instance.name", accumuloConf.getInstanceName());
        apacheConf.setProperty("instance.zookeeper.host", accumuloConf.getZookeepers());
        final ClientConfiguration aconf = new ClientConfiguration(Collections.singletonList(apacheConf));
        final Instance instance = new ZooKeeperInstance(aconf);
        Connector con = instance.getConnector(accumuloConf.getUsername(), new PasswordToken(accumuloConf.getPassword()));
        Scanner s = con.createScanner(conf.getMetaTable(), con.securityOperations().getUserAuthorizations(con.whoami()));
        try {
            s.setRange(new Range(Meta.METRIC_PREFIX, true, Meta.TAG_PREFIX, false));
            for (Entry<Key, Value> e : s) {
                System.out.println(e.getKey().getRow().toString().substring(Meta.METRIC_PREFIX.length()));
            }
        } finally {
            s.close();
        }
    }
}
@Test
public void shouldConfigPoolOnConstructionWithPoolSizeOneAndNoIoRegistry() throws Exception {
    final Configuration conf = new BaseConfiguration();
    final GryoPool pool = GryoPool.build().poolSize(1)
            .ioRegistries(conf.getList(GryoPool.CONFIG_IO_REGISTRY, Collections.emptyList())).create();
    final GryoReader reader = pool.takeReader();
    final GryoWriter writer = pool.takeWriter();
    pool.offerReader(reader);
    pool.offerWriter(writer);
    for (int ix = 0; ix < 100; ix++) {
        final GryoReader r = pool.takeReader();
        final GryoWriter w = pool.takeWriter();
        assertReaderWriter(w, r, 1, Integer.class);
        // should always return the same original instance
        assertEquals(reader, r);
        assertEquals(writer, w);
        pool.offerReader(r);
        pool.offerWriter(w);
    }
}
@Override
public Configuration newGraphConfiguration(final String graphName, final Class<?> test, final String testMethodName,
        final Map<String, Object> configurationOverrides, final LoadGraphWith.GraphData loadGraphWith) {
    final Configuration conf = new BaseConfiguration();
    getBaseConfiguration(graphName, test, testMethodName, loadGraphWith).entrySet().stream()
            .forEach(e -> conf.setProperty(e.getKey(), e.getValue()));
    // Assign overrides, but don't allow the gremlin.graph setting to be overridden:
    // the test suite should not be able to override that.
    configurationOverrides.entrySet().stream()
            .filter(c -> !c.getKey().equals(Graph.GRAPH))
            .forEach(e -> conf.setProperty(e.getKey(), e.getValue()));
    return conf;
}
@Override
public Iterator<Vertex> head(final String location, final Class readerClass, final int totalLines) {
    final Configuration configuration = new BaseConfiguration();
    configuration.setProperty(Constants.GREMLIN_HADOOP_INPUT_LOCATION, location);
    configuration.setProperty(Constants.GREMLIN_HADOOP_GRAPH_READER, readerClass.getCanonicalName());
    try {
        if (InputRDD.class.isAssignableFrom(readerClass)) {
            return IteratorUtils.map(((InputRDD) readerClass.getConstructor().newInstance())
                    .readGraphRDD(configuration, new JavaSparkContext(Spark.getContext())).take(totalLines).iterator(),
                    tuple -> tuple._2().get());
        } else if (InputFormat.class.isAssignableFrom(readerClass)) {
            return IteratorUtils.map(new InputFormatRDD()
                    .readGraphRDD(configuration, new JavaSparkContext(Spark.getContext())).take(totalLines).iterator(),
                    tuple -> tuple._2().get());
        }
    } catch (final Exception e) {
        throw new IllegalArgumentException(e.getMessage(), e);
    }
    throw new IllegalArgumentException("The provided parserClass must be an " + InputFormat.class.getCanonicalName()
            + " or an " + InputRDD.class.getCanonicalName() + ": " + readerClass.getCanonicalName());
}
@Override
public <K, V> Iterator<KeyValue<K, V>> head(final String location, final String memoryKey, final Class readerClass, final int totalLines) {
    final Configuration configuration = new BaseConfiguration();
    configuration.setProperty(Constants.GREMLIN_HADOOP_INPUT_LOCATION, location);
    configuration.setProperty(Constants.GREMLIN_HADOOP_GRAPH_READER, readerClass.getCanonicalName());
    try {
        if (InputRDD.class.isAssignableFrom(readerClass)) {
            return IteratorUtils.map(((InputRDD) readerClass.getConstructor().newInstance())
                    .readMemoryRDD(configuration, memoryKey, new JavaSparkContext(Spark.getContext())).take(totalLines).iterator(),
                    tuple -> new KeyValue(tuple._1(), tuple._2()));
        } else if (InputFormat.class.isAssignableFrom(readerClass)) {
            return IteratorUtils.map(new InputFormatRDD()
                    .readMemoryRDD(configuration, memoryKey, new JavaSparkContext(Spark.getContext())).take(totalLines).iterator(),
                    tuple -> new KeyValue(tuple._1(), tuple._2()));
        }
    } catch (final Exception e) {
        throw new IllegalArgumentException(e.getMessage(), e);
    }
    throw new IllegalArgumentException("The provided parserClass must be an " + InputFormat.class.getCanonicalName()
            + " or an " + InputRDD.class.getCanonicalName() + ": " + readerClass.getCanonicalName());
}
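Both head() overloads above use the same dispatch idiom: inspect the supplied reader class with isAssignableFrom, pick a reading strategy, and fail loudly for anything else. A minimal self-contained sketch of that idiom, using hypothetical marker interfaces rather than the real TinkerPop types:

```java
class ReaderDispatchDemo {
    // Hypothetical stand-ins for TinkerPop's InputRDD / InputFormat interfaces.
    interface InputRDD {}
    interface InputFormat {}
    static class MyRDD implements InputRDD {}
    static class MyFormat implements InputFormat {}

    /** Pick a reading strategy from the class of the reader, as head() does. */
    static String dispatch(Class<?> readerClass) {
        if (InputRDD.class.isAssignableFrom(readerClass)) return "rdd";
        if (InputFormat.class.isAssignableFrom(readerClass)) return "input-format";
        throw new IllegalArgumentException("The provided parserClass must be an InputFormat or an InputRDD: "
                + readerClass.getCanonicalName());
    }

    public static void main(String[] args) {
        System.out.println(dispatch(MyRDD.class));    // rdd
        System.out.println(dispatch(MyFormat.class)); // input-format
    }
}
```

Dispatching on the class object (instead of an instance) lets the caller pass `SomeReader.class` without constructing anything until the strategy is known.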
@Test
public void shouldWriteToArbitraryRDD() throws Exception {
    final Configuration configuration = new BaseConfiguration();
    configuration.setProperty("spark.master", "local[4]");
    configuration.setProperty("spark.serializer", GryoSerializer.class.getCanonicalName());
    configuration.setProperty(Graph.GRAPH, HadoopGraph.class.getName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_INPUT_LOCATION, SparkHadoopGraphProvider.PATHS.get("tinkerpop-modern.kryo"));
    configuration.setProperty(Constants.GREMLIN_HADOOP_GRAPH_READER, GryoInputFormat.class.getCanonicalName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_GRAPH_WRITER, ExampleOutputRDD.class.getCanonicalName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_OUTPUT_LOCATION, TestHelper.makeTestDataDirectory(this.getClass(), "shouldWriteToArbitraryRDD"));
    configuration.setProperty(Constants.GREMLIN_HADOOP_JARS_IN_DISTRIBUTED_CACHE, false);
    ////////
    Graph graph = GraphFactory.open(configuration);
    graph.compute(SparkGraphComputer.class)
            .result(GraphComputer.ResultGraph.NEW)
            .persist(GraphComputer.Persist.EDGES)
            .program(TraversalVertexProgram.build()
                    .traversal(graph.traversal().withComputer(Computer.compute(SparkGraphComputer.class)),
                            "gremlin-groovy", "g.V()").create(graph)).submit().get();
}
@Test
public void shouldSupportHadoopGraphOLTP() {
    final Configuration configuration = new BaseConfiguration();
    configuration.setProperty("spark.master", "local[4]");
    configuration.setProperty("spark.serializer", GryoSerializer.class.getCanonicalName());
    configuration.setProperty(Graph.GRAPH, HadoopGraph.class.getName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_GRAPH_READER, ExampleInputRDD.class.getCanonicalName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_GRAPH_WRITER, GryoOutputFormat.class.getCanonicalName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_OUTPUT_LOCATION, TestHelper.makeTestDataDirectory(this.getClass(), "shouldSupportHadoopGraphOLTP"));
    configuration.setProperty(Constants.GREMLIN_HADOOP_JARS_IN_DISTRIBUTED_CACHE, false);
    ////////
    Graph graph = GraphFactory.open(configuration);
    GraphTraversalSource g = graph.traversal(); // OLTP
    assertEquals("person", g.V().has("age", 29).next().label());
    assertEquals(Long.valueOf(4), g.V().count().next());
    assertEquals(Long.valueOf(0), g.E().count().next());
    assertEquals(Long.valueOf(2), g.V().has("age", P.gt(30)).count().next());
}
@Test
public void shouldReadFromWriteToArbitraryRDD() throws Exception {
    final Configuration configuration = new BaseConfiguration();
    configuration.setProperty("spark.master", "local[4]");
    configuration.setProperty("spark.serializer", GryoSerializer.class.getCanonicalName());
    configuration.setProperty(Graph.GRAPH, HadoopGraph.class.getName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_GRAPH_READER, ExampleInputRDD.class.getCanonicalName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_GRAPH_WRITER, ExampleOutputRDD.class.getCanonicalName());
    configuration.setProperty(Constants.GREMLIN_HADOOP_OUTPUT_LOCATION, TestHelper.makeTestDataDirectory(this.getClass(), "shouldReadFromWriteToArbitraryRDD"));
    configuration.setProperty(Constants.GREMLIN_HADOOP_JARS_IN_DISTRIBUTED_CACHE, false);
    ////////
    Graph graph = GraphFactory.open(configuration);
    graph.compute(SparkGraphComputer.class)
            .result(GraphComputer.ResultGraph.NEW)
            .persist(GraphComputer.Persist.EDGES)
            .program(TraversalVertexProgram.build()
                    .traversal(graph.traversal().withComputer(SparkGraphComputer.class),
                            "gremlin-groovy", "g.V()").create(graph)).submit().get();
}
public Configuration build() {
    // create configuration instance
    Configuration configuration = new BaseConfiguration();
    // url
    configuration.setProperty(Neo4JUrlConfigurationKey, "bolt://" + hostname + ":" + port);
    // hostname
    configuration.setProperty(Neo4JHostnameConfigurationKey, hostname);
    // port
    configuration.setProperty(Neo4JPortConfigurationKey, port);
    // username
    configuration.setProperty(Neo4JUsernameConfigurationKey, username);
    // password
    configuration.setProperty(Neo4JPasswordConfigurationKey, password);
    // graph name
    configuration.setProperty(Neo4JGraphNameConfigurationKey, graphName);
    // vertex id provider
    configuration.setProperty(Neo4JVertexIdProviderClassNameConfigurationKey, vertexIdProviderClassName != null ? vertexIdProviderClassName : elementIdProviderClassName);
    // edge id provider
    configuration.setProperty(Neo4JEdgeIdProviderClassNameConfigurationKey, edgeIdProviderClassName != null ? edgeIdProviderClassName : elementIdProviderClassName);
    // property id provider
    configuration.setProperty(Neo4JPropertyIdProviderClassNameConfigurationKey, propertyIdProviderClassName != null ? propertyIdProviderClassName : elementIdProviderClassName);
    // return configuration
    return configuration;
}
@Test
public void testLocalNodeUsingExt() throws BackendException, InterruptedException {
    String baseDir = Joiner.on(File.separator).join("target", "es", "jvmlocal_ext");
    assertFalse(new File(baseDir + File.separator + "data").exists());
    CommonsConfiguration cc = new CommonsConfiguration(new BaseConfiguration());
    cc.set("index." + INDEX_NAME + ".elasticsearch.ext.node.data", "true");
    cc.set("index." + INDEX_NAME + ".elasticsearch.ext.node.client", "false");
    cc.set("index." + INDEX_NAME + ".elasticsearch.ext.node.local", "true");
    cc.set("index." + INDEX_NAME + ".elasticsearch.ext.path.data", baseDir + File.separator + "data");
    cc.set("index." + INDEX_NAME + ".elasticsearch.ext.path.work", baseDir + File.separator + "work");
    cc.set("index." + INDEX_NAME + ".elasticsearch.ext.path.logs", baseDir + File.separator + "logs");
    ModifiableConfiguration config = new ModifiableConfiguration(GraphDatabaseConfiguration.ROOT_NS, cc,
            BasicConfiguration.Restriction.NONE);
    config.set(INTERFACE, ElasticSearchSetup.NODE.toString(), INDEX_NAME);
    Configuration indexConfig = config.restrictTo(INDEX_NAME);
    IndexProvider idx = new ElasticSearchIndex(indexConfig);
    simpleWriteAndQuery(idx);
    idx.close();
    assertTrue(new File(baseDir + File.separator + "data").exists());
}
@Test
public void testLocalNodeUsingExtAndIndexDirectory() throws BackendException, InterruptedException {
    String baseDir = Joiner.on(File.separator).join("target", "es", "jvmlocal_ext2");
    assertFalse(new File(baseDir + File.separator + "data").exists());
    CommonsConfiguration cc = new CommonsConfiguration(new BaseConfiguration());
    cc.set("index." + INDEX_NAME + ".elasticsearch.ext.node.data", "true");
    ModifiableConfiguration config = new ModifiableConfiguration(GraphDatabaseConfiguration.ROOT_NS, cc,
            BasicConfiguration.Restriction.NONE);
    config.set(INTERFACE, ElasticSearchSetup.NODE.toString(), INDEX_NAME);
    config.set(INDEX_DIRECTORY, baseDir, INDEX_NAME);
    Configuration indexConfig = config.restrictTo(INDEX_NAME);
    IndexProvider idx = new ElasticSearchIndex(indexConfig);
    simpleWriteAndQuery(idx);
    idx.close();
    assertTrue(new File(baseDir + File.separator + "data").exists());
}
private static ReadConfiguration getLocalConfiguration(String shortcutOrFile) {
    File file = new File(shortcutOrFile);
    if (file.exists()) return getLocalConfiguration(file);
    else {
        int pos = shortcutOrFile.indexOf(':');
        if (pos < 0) pos = shortcutOrFile.length();
        String backend = shortcutOrFile.substring(0, pos);
        Preconditions.checkArgument(StandardStoreManager.getAllManagerClasses().containsKey(backend.toLowerCase()),
                "Backend shorthand unknown: %s", backend);
        String secondArg = null;
        if (pos + 1 < shortcutOrFile.length()) secondArg = shortcutOrFile.substring(pos + 1).trim();
        BaseConfiguration config = new BaseConfiguration();
        ModifiableConfiguration writeConfig = new ModifiableConfiguration(ROOT_NS,
                new CommonsConfiguration(config), BasicConfiguration.Restriction.NONE);
        writeConfig.set(STORAGE_BACKEND, backend);
        ConfigOption option = Backend.getOptionForShorthand(backend);
        if (option == null) {
            Preconditions.checkArgument(secondArg == null);
        } else if (option == STORAGE_DIRECTORY || option == STORAGE_CONF_FILE) {
            Preconditions.checkArgument(StringUtils.isNotBlank(secondArg),
                    "Need to provide additional argument to initialize storage backend");
            writeConfig.set(option, getAbsolutePath(secondArg));
        } else if (option == STORAGE_HOSTS) {
            Preconditions.checkArgument(StringUtils.isNotBlank(secondArg),
                    "Need to provide additional argument to initialize storage backend");
            writeConfig.set(option, new String[]{secondArg});
        } else throw new IllegalArgumentException("Invalid configuration option for backend " + option);
        return new CommonsConfiguration(config);
    }
}
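The shorthand handling above splits a `backend:argument` string at the first colon, treating a missing colon as "backend only". That string-splitting step in isolation can be sketched with just the JDK (hypothetical ShorthandDemo class; the real method additionally validates the backend against the registered store managers):

```java
class ShorthandDemo {
    /**
     * Split a "backend:argument" shorthand at the first ':'.
     * No colon means the whole string is the backend and the argument is null,
     * mirroring the indexOf/substring logic in getLocalConfiguration above.
     */
    static String[] parse(String shortcut) {
        int pos = shortcut.indexOf(':');
        if (pos < 0) pos = shortcut.length();
        String backend = shortcut.substring(0, pos);
        String secondArg = pos + 1 < shortcut.length() ? shortcut.substring(pos + 1).trim() : null;
        return new String[] { backend, secondArg };
    }

    public static void main(String[] args) {
        String[] withArg = parse("berkeleyje:/tmp/graph");
        System.out.println(withArg[0] + " | " + withArg[1]); // berkeleyje | /tmp/graph
        String[] bare = parse("inmemory");
        System.out.println(bare[0] + " | " + bare[1]);       // inmemory | null
    }
}
```

Setting `pos` to the string length when no colon is found lets a single `substring(0, pos)` cover both cases without branching.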
/**
 * Asserts that when a property is requested from the configuration and it fires an error event
 * (e.g. the database is not available), the previously stored values are not cleared.
 */
@Test
public void testAssertGetPropertyErrorReturnPreviousValue() throws Exception {
    // Get a reloadable property source that loads properties from the configuration every time a property is read.
    BaseConfiguration configuration = new BaseConfiguration() {
        @Override
        public Object getProperty(String key) {
            fireError(EVENT_READ_PROPERTY, key, null, new IllegalStateException("test exception"));
            return null;
        }
    };
    configuration.addProperty(TEST_KEY, TEST_VALUE_1);
    ReloadablePropertySource reloadablePropertySource = getNewReloadablePropertiesSource(0L, configuration);
    verifyPropertySourceValue(reloadablePropertySource, TEST_VALUE_1);
}
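The test above encodes a contract worth restating: when the underlying source errors during a read, a reloadable property source should keep serving the last good value rather than clearing it. A minimal sketch of that pattern, assuming nothing beyond the JDK (hypothetical LastGoodValue class, not the real ReloadablePropertySource):

```java
import java.util.function.Supplier;

class LastGoodValue<T> {
    private final Supplier<T> loader;
    private volatile T lastGood;

    LastGoodValue(Supplier<T> loader, T initial) {
        this.loader = loader;
        this.lastGood = initial;
    }

    /** Try to reload; on any error (or a null result) keep the previously stored value. */
    T get() {
        try {
            T fresh = loader.get();
            if (fresh != null) lastGood = fresh;
        } catch (RuntimeException e) {
            // Source unavailable (the "fireError" case above): fall through to the cached value.
        }
        return lastGood;
    }

    public static void main(String[] args) {
        LastGoodValue<String> v = new LastGoodValue<>(
                () -> { throw new IllegalStateException("test exception"); }, "value1");
        System.out.println(v.get()); // value1 -- the failed read did not clear it
        System.out.println(v.get()); // value1
    }
}
```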
@Before
public void setUp() throws Exception {
    final String clientPort = "21818";
    final String dataDirectory = System.getProperty("java.io.tmpdir");
    zookeeperHost = "localhost:" + clientPort;
    ServerConfig config = new ServerConfig();
    config.parse(new String[] { clientPort, dataDirectory });
    testConfig = new BaseConfiguration();
    testConfig.setProperty("quorum", zookeeperHost);
    testConfig.setProperty("znode", "/config");
    testConfig.setProperty(APPNAME_PROPERTY, "test");
    testConfig.setProperty(ROOTCONFIG_PROPERTY, "test");
    zkServer = new ZookeeperTestUtil.ZooKeeperThread(config);
    server = new Thread(zkServer);
    server.start();
    zookeeper = connect(zookeeperHost);
}
@Test
public void testLoad() throws URISyntaxException, InterruptedException, IOException, ConfigurationException {
    final String INPUT = "test.yml";
    URL testUrl = getClass().getResource("/" + INPUT);
    final String testYaml = testUrl.toURI().getPath();
    FileBasedConfigSource source = new FileBasedConfigSource();
    Configuration config = new BaseConfiguration();
    config.setProperty(ROOTCONFIG_PROPERTY, testYaml);
    source.configure(config, new HierarchicalConfiguration(), null);
    HierarchicalConfigurationDeserializer deserializer = new YamlDeserializer();
    InputStream is = source.load("test.yml");
    ConfigurationResult result = deserializer.deserialize(is);
    Configuration configuration = result.getConfiguration();
    assertThat(configuration.getString("type.unicodeString"), is("€"));
}
@Test
public void testMultiFileLoad() throws Exception {
    final String INPUT = "multiple-files/root.yaml";
    URL testUrl = getClass().getResource("/" + INPUT);
    final String testYaml = testUrl.toURI().getPath();
    FileBasedConfigSource source = new FileBasedConfigSource();
    Configuration config = new BaseConfiguration();
    config.setProperty(ROOTCONFIG_PROPERTY, testYaml);
    source.configure(config, new HierarchicalConfiguration(), null);
    HierarchicalConfigurationDeserializer deserializer = new YamlDeserializer();
    InputStream is = source.load("root.yaml");
    ConfigurationResult result = deserializer.deserialize(is);
    HierarchicalConfiguration configuration = result.getConfiguration();
    YamlSerializer serializer = new YamlSerializer();
    serializer.serialize(configuration, System.out);
    // assertThat(configuration.getString("type.unicodeString"), is("€"));
}
@Test
public void basicTest() {
    Configuration configuration = new BaseConfiguration();
    configuration.setProperty("a", "XXX");
    configuration.setProperty("b", "YYY");
    configuration.setProperty("c", 1);
    Precomputed<String> precomputed = Precomputed.monitorByKeys(
            configuration,
            config -> config.getString("a") + "--" + config.getString("b") + "--" + config.getInt("c", 0),
            "a", "b");
    assertThat(precomputed.get(), is("XXX--YYY--1"));
    // Not a monitored value, so no update.
    configuration.setProperty("c", 2);
    assertThat(precomputed.get(), is("XXX--YYY--1"));
    // Monitored value; update.
    configuration.setProperty("a", "ZZZ");
    assertThat(precomputed.get(), is("ZZZ--YYY--2"));
    // Monitored value; update.
    configuration.setProperty("b", "XXX");
    assertThat(precomputed.get(), is("ZZZ--XXX--2"));
}
@Test
public void timestampTest() {
    Configuration configuration = new BaseConfiguration();
    configuration.setProperty("a", "AAA");
    configuration.setProperty(ConcurrentConfiguration.MODIFICATION_TIMESTAMP, System.nanoTime());
    Precomputed<String> precomputed = Precomputed.monitorByUpdate(
            configuration,
            config -> config.getString("a")
    );
    assertThat(precomputed.get(), is("AAA"));
    // Not a monitored value, so no update.
    configuration.setProperty("a", "BBB");
    assertThat(precomputed.get(), is("AAA"));
    // Touch the timestamp so an update will be required.
    configuration.setProperty(ConcurrentConfiguration.MODIFICATION_TIMESTAMP, System.nanoTime());
    assertThat(precomputed.get(), is("BBB"));
}
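The two Precomputed tests above show recomputation gated either on a set of monitored keys or on a modification timestamp. The key-monitoring variant can be sketched self-contained (hypothetical KeyMonitoredMemo class over a plain Map; the real Precomputed works against a commons-configuration Configuration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;
import java.util.function.Function;

class KeyMonitoredMemo<T> {
    private final Map<String, Object> config;
    private final Function<Map<String, Object>, T> compute;
    private final String[] monitoredKeys;
    private final Map<String, Object> seen = new HashMap<>();
    private T cached;
    private boolean computed = false;

    KeyMonitoredMemo(Map<String, Object> config, Function<Map<String, Object>, T> compute, String... keys) {
        this.config = config;
        this.compute = compute;
        this.monitoredKeys = keys;
    }

    /** Recompute only when one of the monitored keys changed since the last call. */
    T get() {
        boolean stale = !computed;
        for (String k : monitoredKeys) {
            if (!Objects.equals(seen.get(k), config.get(k))) stale = true;
        }
        if (stale) {
            cached = compute.apply(config);
            for (String k : monitoredKeys) seen.put(k, config.get(k));
            computed = true;
        }
        return cached;
    }

    public static void main(String[] args) {
        Map<String, Object> cfg = new HashMap<>();
        cfg.put("a", "XXX");
        cfg.put("b", "YYY");
        cfg.put("c", 1);
        KeyMonitoredMemo<String> memo = new KeyMonitoredMemo<>(
                cfg, c -> c.get("a") + "--" + c.get("b") + "--" + c.get("c"), "a", "b");
        System.out.println(memo.get()); // XXX--YYY--1
        cfg.put("c", 2);                // "c" is not monitored: no recompute
        System.out.println(memo.get()); // XXX--YYY--1
        cfg.put("a", "ZZZ");            // "a" is monitored: recompute (also picks up c=2)
        System.out.println(memo.get()); // ZZZ--YYY--2
    }
}
```

Note the same subtlety the tests exercise: a change to an unmonitored key is not seen until a monitored key (or the timestamp, in the other variant) forces a recompute.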
protected Configuration getConfiguration(boolean create, boolean open, boolean transactional) {
    if (configuration != null) return configuration;
    else return new BaseConfiguration() {
        {
            setProperty(Graph.GRAPH, OrientGraph.class.getName());
            setProperty(OrientGraph.CONFIG_URL, url);
            setProperty(OrientGraph.CONFIG_USER, user);
            setProperty(OrientGraph.CONFIG_PASS, password);
            setProperty(OrientGraph.CONFIG_CREATE, create);
            setProperty(OrientGraph.CONFIG_OPEN, open);
            setProperty(OrientGraph.CONFIG_TRANSACTIONAL, transactional);
            setProperty(OrientGraph.CONFIG_LABEL_AS_CLASSNAME, labelAsClassName);
        }
    };
}
@Test
public void indexCollation() {
    OrientGraph graph = newGraph();
    String label = "VC1";
    String key = "name";
    String value = "bob";
    Configuration config = new BaseConfiguration();
    config.setProperty("type", "UNIQUE");
    config.setProperty("keytype", OType.STRING);
    config.setProperty("collate", "ci");
    graph.createVertexIndex(key, label, config);
    graph.addVertex(label, value);
    // TODO: test with a "has" traversal, if/when that supports a case-insensitive match predicate
    // OrientIndexQuery indexRef = new OrientIndexQuery(true, Optional.of(label), value.toUpperCase());
    // Iterator<OrientVertex> result = graph.getIndexedVertices(indexRef).iterator();
    // Assert.assertEquals(result.hasNext(), true);
}
@Test
public void testFind() throws Exception {
    String host = System.getProperty("Z3950CatalogTest.host");
    String port = System.getProperty("Z3950CatalogTest.port");
    String base = System.getProperty("Z3950CatalogTest.base");
    String recordCharset = System.getProperty("Z3950CatalogTest.recordCharset");
    Assume.assumeNotNull(host, port, base);
    String fieldName = "sys";
    String value = "001704913";
    Locale locale = null;
    final String catalogId = "catalogId";
    CatalogConfiguration c = new CatalogConfiguration(catalogId, "", new BaseConfiguration() {{
        addProperty(CatalogConfiguration.PROPERTY_FIELDS, "sys");
        addProperty(CatalogConfiguration.FIELD_PREFIX + '.' + "sys" + '.' + Z3950Catalog.PROPERTY_FIELD_QUERY,
                "@attrset bib-1 @attr 1=12 @attr 4=1 \"%s\"");
    }});
    Z3950Catalog instance = new Z3950Catalog(host, Integer.parseInt(port), base,
            recordCharset == null ? null : Charset.forName(recordCharset),
            Z3950Catalog.readFields(c));
    List<MetadataItem> result = instance.find(fieldName, value, locale);
    assertFalse(result.isEmpty());
}
@Before
public void setUp() {
    conf = new BaseConfiguration();
    conf.setProperty(DesaServices.PROPERTY_DESASERVICES, "ds1,dsNulls");
    String prefix = DesaServices.PREFIX_DESA + '.' + "ds1" + '.';
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_USER, "ds1user");
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_PASSWD, "ds1passwd");
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_PRODUCER, "ds1producer");
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_OPERATOR, "ds1operator");
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_EXPORTMODELS, "model:id1,model:id2");
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_RESTAPI, "https://SERVER/dea-frontend/rest/sipsubmission");
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_WEBSERVICE, "https://SERVER/dea-frontend/ws/SIPSubmissionService");
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_NOMENCLATUREACRONYMS, "acr1,acr2");
    prefix = DesaServices.PREFIX_DESA + '.' + "dsNulls" + '.';
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_USER, null);
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_PASSWD, "");
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_EXPORTMODELS, null);
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_NOMENCLATUREACRONYMS, null);
    prefix = DesaServices.PREFIX_DESA + '.' + "dsNotActive" + '.';
    conf.setProperty(prefix + DesaConfiguration.PROPERTY_USER, "NA");
    desaServices = new DesaServices(conf);
}
That concludes this look at configuring SELinux access so that Apache can serve a mounted directory (HTTP access to Linux files). Thanks for reading; for more on the example source code for org.apache.commons.configuration.AbstractConfiguration, org.apache.commons.configuration.AbstractFileConfiguration, org.apache.commons.configuration.AbstractHierarchicalFileConfiguration, and org.apache.commons.configuration.BaseConfiguration, search this site.