In this article we will walk through the trouble of starting spoon.sh on Amazon EC2 Linux. We will also cover Node-gyp and Connect-mongo on Amazon AWS EC2 Linux, deploying a Rails 3.1 application to Amazon EC2, installing nginx 1.9.15 on the Amazon Linux distro, and example source code for com.amazonaws.services.ec2.AmazonEC2AsyncClient, to help you better understand these topics.
Contents:

- Trouble starting spoon.sh on Amazon EC2 Linux
- Node-gyp Connect-mongo on Amazon AWS EC2 Linux
- amazon-ec2 – Deploying a Rails 3.1 application to Amazon EC2
- amazon-web-services – How to install nginx 1.9.15 on the Amazon Linux distro
- Example source code for com.amazonaws.services.ec2.AmazonEC2AsyncClient
Trouble starting spoon.sh on Amazon EC2 Linux
I am new to Linux and Amazon EC2.

I configured JAVA_HOME by following these two links:

How to know the JAVA_HOME variable
bash_profile

So the current path in my bash_profile is:
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64
export PATH=$PATH:/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64/bin
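As a quick sanity check (a sketch; the JDK path is the one from the question), you can confirm that the two exports actually landed on PATH:

```shell
# Sketch: verify the two exports from ~/.bash_profile took effect.
# The JDK path is copied from the question.
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.51.x86_64
export PATH=$PATH:$JAVA_HOME/bin
case ":$PATH:" in
  *":$JAVA_HOME/bin:"*) echo "JAVA_HOME bin dir is on PATH" ;;
  *) echo "JAVA_HOME bin dir is missing from PATH" ;;
esac
```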
Now when I try to launch ./spoon.sh, it gives me this error:
Caused by: java.lang.UnsatisfiedLinkError: Could not load SWT library. Reasons:
    no swt-pi-gtk-3740 in java.library.path
    no swt-pi-gtk in java.library.path
    /root/.swt/lib/linux/x86_64/libswt-pi-gtk-3740.so: libgtk-x11-2.0.so.0: cannot open shared object file: No such file or directory
    Can't load library: /root/.swt/lib/linux/x86_64/libswt-pi-gtk.so
  at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
  at org.eclipse.swt.internal.Library.loadLibrary(Unknown Source)
  at org.eclipse.swt.internal.gtk.OS.<clinit>(Unknown Source)
  at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
  at org.eclipse.swt.internal.Converter.wcsToMbcs(Unknown Source)
  at org.eclipse.swt.widgets.Display.<clinit>(Unknown Source)
  at org.pentaho.di.ui.spoon.Spoon.main(Spoon.java:540)
  at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
  at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
  at java.lang.reflect.Method.invoke(Method.java:622)
  at org.pentaho.commons.launcher.Launcher.main(Launcher.java:134)
So can anyone suggest what is wrong?
Answer 1

Pentaho does not support OpenJDK Java versions; install the Oracle/Sun JDK.

But in any case... are you trying to run the Kettle environment on an EC2 instance, with no X display? If you only want to run Kettle jobs or transformations, you must use kitchen.sh or pan.sh, not spoon.sh. Spoon.sh is only for creating transformations or jobs through the GUI.
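As a minimal sketch of that advice (the .kjb job path is a made-up placeholder), you could guard against launching the GUI when no X display is available and fall back to kitchen.sh:

```shell
# Sketch: on a headless EC2 instance, run Kettle work with kitchen.sh instead
# of spoon.sh (which needs GTK/X). The .kjb path is a hypothetical example.
run_kettle() {
  if [ -n "$DISPLAY" ]; then
    echo "./spoon.sh"                                        # GUI session available
  else
    echo "./kitchen.sh -file=/path/to/job.kjb -level=Basic"  # headless: run the job
  fi
}
unset DISPLAY
run_kettle
```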
Node-gyp Connect-mongo on Amazon AWS EC2 Linux
I cannot get connect-mongo to install on my EC2 instance. It is related to node-gyp not being able to access "/root/.node-gyp/0.10.40" – which is odd, because the installed Node version is 4.2.1.

The folder "/root/.node-gyp/" does not even exist on the system. So why is node-gyp looking there?

When configuring the server, I first installed Node 0.10.40 and later upgraded to 4.2.1.

How do we point node-gyp at the correct directory? Or is the problem somewhere else?
Log from a successful install on localhost:
$ node --version
v4.2.1
$ npm --version
2.14.7
$ npm install connect-mongo --save

> kerberos@0.0.16 install /Users/username/Sites/adserver/node_modules/connect-mongo/node_modules/mongodb/node_modules/mongodb-core/node_modules/kerberos
> node-gyp rebuild

  CXX(target) Release/obj.target/kerberos/lib/kerberos.o
  CXX(target) Release/obj.target/kerberos/lib/worker.o
  CC(target) Release/obj.target/kerberos/lib/kerberosgss.o
../lib/kerberosgss.c:509:13: warning: implicit declaration of function 'gss_acquire_cred_impersonate_name' is invalid in C99 [-Wimplicit-function-declaration]
    maj_stat = gss_acquire_cred_impersonate_name(&min_stat,
               ^
1 warning generated.
  CC(target) Release/obj.target/kerberos/lib/base64.o
  CXX(target) Release/obj.target/kerberos/lib/kerberos_context.o
  SOLINK_MODULE(target) Release/kerberos.node
connect-mongo@0.8.2 node_modules/connect-mongo
├── depd@1.1.0
├── debug@2.2.0 (ms@0.7.1)
└── mongodb@2.0.47 (es6-promise@2.1.1, readable-stream@1.0.31, mongodb-core@1.2.20)
Log from the failed install on EC2 Linux:
$ node --version
v4.2.1
$ npm --version
2.14.7
$ sudo npm install connect-mongo

> kerberos@0.0.16 install /home/ec2-user/apps/adserver/node_modules/connect-mongo/node_modules/mongodb/node_modules/mongodb-core/node_modules/kerberos
> node-gyp rebuild

gyp WARN EACCES user "root" does not have permission to access the dev dir "/root/.node-gyp/0.10.40"
gyp WARN EACCES attempting to reinstall using temporary dev dir "/home/ec2-user/apps/adserver/node_modules/connect-mongo/node_modules/mongodb/node_modules/mongodb-core/node_modules/kerberos/.node-gyp"
make: Entering directory `/home/ec2-user/apps/adserver/node_modules/connect-mongo/node_modules/mongodb/node_modules/mongodb-core/node_modules/kerberos/build'
  CXX(target) Release/obj.target/kerberos/lib/kerberos.o
In file included from ../lib/kerberos.cc:1:0:
../lib/kerberos.h:5:27: fatal error: gssapi/gssapi.h: No such file or directory
 #include <gssapi/gssapi.h>
                           ^
compilation terminated.
make: *** [Release/obj.target/kerberos/lib/kerberos.o] Error 1
make: Leaving directory `/home/ec2-user/apps/adserver/node_modules/connect-mongo/node_modules/mongodb/node_modules/mongodb-core/node_modules/kerberos/build'
gyp ERR! build error
gyp ERR! stack Error: `make` failed with exit code: 2
gyp ERR! stack     at ChildProcess.onExit (/usr/lib/node_modules/npm/node_modules/node-gyp/lib/build.js:267:23)
gyp ERR! stack     at ChildProcess.emit (events.js:98:17)
gyp ERR! stack     at Process.ChildProcess._handle.onexit (child_process.js:820:12)
gyp ERR! System Linux 4.1.7-15.23.amzn1.x86_64
gyp ERR! command "node" "/usr/lib/node_modules/npm/node_modules/node-gyp/bin/node-gyp.js" "rebuild"
gyp ERR! cwd /home/ec2-user/apps/adserver/node_modules/connect-mongo/node_modules/mongodb/node_modules/mongodb-core/node_modules/kerberos
gyp ERR! node -v v0.10.40
gyp ERR! node-gyp -v v1.0.1
gyp ERR! not ok
npm WARN optional dep failed, continuing kerberos@0.0.16
connect-mongo@0.8.2 node_modules/connect-mongo
├── depd@1.1.0
├── debug@2.2.0 (ms@0.7.1)
└── mongodb@2.0.47 (readable-stream@1.0.31, es6-promise@2.1.1, mongodb-core@1.2.20)
[ec2-user@ip-172-31-9-139 adserver]$
The problem is not the gyp WARN EACCES ... message. That is just a warning, and npm can continue with a workaround. (To get rid of that warning message, see this link.)

The real problem is this:

../lib/kerberos.h:5:27: fatal error: gssapi/gssapi.h: No such file or directory
 #include <gssapi/gssapi.h>

I found this link, and the solution (on Ubuntu/Debian) appears to be:

sudo apt-get install libkrb5-dev
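The question is about Amazon Linux, which is yum-based rather than apt-based; there the gssapi/gssapi.h header is provided by the krb5-devel package. A small sketch mapping package manager to the install command (the package names are the commonly used ones, worth double-checking on your AMI):

```shell
# Sketch: the gssapi/gssapi.h header comes from the Kerberos dev package,
# whose name differs by distro family.
krb5_dev_install_cmd() {
  case "$1" in
    yum) echo "sudo yum install -y krb5-devel" ;;       # Amazon Linux / RHEL / CentOS
    apt) echo "sudo apt-get install -y libkrb5-dev" ;;  # Ubuntu / Debian
    *)   echo "unknown package manager: $1" ;;
  esac
}
krb5_dev_install_cmd yum
```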
amazon-ec2 – Deploying a Rails 3.1 application to Amazon EC2
I have googled a lot but still cannot find a way to deploy my site to EC2.

Can anyone explain more about EC2? I am developing on Ubuntu 11.04.

I would like to deploy with Passenger and Nginx. Thanks.
Solution:

I am deploying a Rails 3.1 application to an EC2 micro instance using Capistrano. I also set up Ruby on my EC2 instance with RVM. I had been using Thin, but this weekend I switched to Unicorn for testing. I will share what I am doing, and perhaps you can figure out how to adapt it to use Passenger (or others can comment on that). Since I am no expert, I also welcome any suggestions.
amazon-web-services – How to install nginx 1.9.15 on the Amazon Linux distro
I am trying to install the latest version of nginx (>= 1.9.5) on a fresh Amazon Linux to use HTTP/2. I followed the instructions described here -> http://nginx.org/en/linux_packages.html

I created a repo file /etc/yum.repos.d/nginx.repo with this content:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/mainline/centos/7/$basearch/
gpgcheck=0
enabled=1
If I run yum update and yum install nginx, I get:

nginx x86_64 1:1.8.1-1.26.amzn1 amzn-main 557 k

It looks like it still comes from the amzn-main repo. How can I install a newer version of nginx?
-- EDIT --
I added "priority=10" to the nginx.repo file, and now I can install 1.9.15 with yum install nginx, with the following result:
Loaded plugins: priorities, update-motd, upgrade-helper
Resolving Dependencies
--> Running transaction check
---> Package nginx.x86_64 1:1.9.15-1.el7.ngx will be installed
--> Processing Dependency: systemd for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Processing Dependency: libpcre.so.1()(64bit) for package: 1:nginx-1.9.15-1.el7.ngx.x86_64
--> Finished Dependency Resolution
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
           Requires: libpcre.so.1()(64bit)
Error: Package: 1:nginx-1.9.15-1.el7.ngx.x86_64 (nginx)
           Requires: systemd
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
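For reference, the repo file after the edit would look roughly like this. The priority line is what the asker added; switching the baseurl to the centos/6 packages is a workaround sometimes suggested for the systemd/libpcre errors above, since first-generation Amazon Linux is not systemd-based (an assumption to verify against your AMI, not a tested fix):

```ini
# /etc/yum.repos.d/nginx.repo (sketch)
[nginx]
name=nginx repo
# centos/6 builds avoid the systemd requirement on Amazon Linux 1 (unverified)
baseurl=http://nginx.org/packages/mainline/centos/6/$basearch/
gpgcheck=0
enabled=1
priority=10
```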
Example source code for com.amazonaws.services.ec2.AmazonEC2AsyncClient
public void createSnapshotFromTagName(TagNameRequest tagNameRequest, Context context) {
    LambdaLogger logger = context.getLogger();
    logger.log("create ebs snapshot from tag name Start. backup target[" + tagNameRequest + "]");
    String regionName = System.getenv("AWS_DEFAULT_REGION");
    AmazonEC2Async client = RegionUtils.getRegion(regionName)
            .createClient(AmazonEC2AsyncClient.class, new DefaultAWSCredentialsProviderChain(), cc);
    try {
        List<Volume> volumes = describeBackupVolumes(client, tagNameRequest);
        for (Volume volume : volumes) {
            createSnapshot(volume.getVolumeId(), tagNameRequest.getGenerationCount(), context);
        }
    } finally {
        client.shutdown();
    }
}
private void startInstance(AmazonEC2AsyncClient client, DefaultAdapterContext c) {
    StartInstancesRequest startRequest = new StartInstancesRequest();
    startRequest.withInstanceIds(c.child.id);
    client.startInstancesAsync(startRequest,
            new AWSAsyncHandler<StartInstancesRequest, StartInstancesResult>() {
                @Override
                protected void handleError(Exception e) {
                    c.taskManager.patchTaskToFailure(e);
                }

                @Override
                protected void handleSuccess(StartInstancesRequest request, StartInstancesResult result) {
                    AWSUtils.waitForTransitionCompletion(getHost(), result.getStartingInstances(),
                            "running", client, (is, e) -> {
                                if (e == null) {
                                    c.taskManager.finishTask();
                                } else {
                                    c.taskManager.patchTaskToFailure(e);
                                }
                            });
                }
            });
}
@Override
public void handlePatch(Operation op) {
    if (!op.hasBody()) {
        op.fail(new IllegalArgumentException("body is required"));
        return;
    }
    ComputePowerRequest pr = op.getBody(ComputePowerRequest.class);
    op.complete();
    if (pr.isMockRequest) {
        updateComputeState(pr, new DefaultAdapterContext(this, pr));
    } else {
        new DefaultAdapterContext(this, pr)
                .populateBaseContext(BaseAdapterStage.VMDESC)
                .whenComplete((c, e) -> {
                    AmazonEC2AsyncClient client = this.clientManager.getOrCreateEC2Client(
                            c.parentAuth, c.child.description.regionId, this,
                            (t) -> c.taskManager.patchTaskToFailure(t));
                    if (client == null) {
                        return;
                    }
                    applyPowerOperation(client, pr, c);
                });
    }
}
private void applyPowerOperation(AmazonEC2AsyncClient client, ComputePowerRequest pr,
        DefaultAdapterContext c) {
    switch (pr.powerState) {
    case OFF:
        powerOff(client, c);
        break;
    case ON:
        powerOn(client, c);
        break;
    case SUSPEND:
        // TODO: Not supported yet, so simply patch the state with requested power state.
        updateComputeState(pr, c);
        break;
    case UNKNOWN:
    default:
        c.taskManager.patchTaskToFailure(
                new IllegalArgumentException("Unsupported power state transition requested."));
    }
}
private DeferredResult<Void> validateCredentials(
        AuthCredentialsServiceState credentials, String regionId) {
    AmazonEC2AsyncClient client = AWSUtils.getAsyncClient(credentials, regionId,
            this.clientManager.getExecutor());
    AWSDeferredResultAsyncHandler<DescribeAvailabilityZonesRequest, DescribeAvailabilityZonesResult> asyncHandler =
            new AWSDeferredResultAsyncHandler<>(this, "Validate Credentials");
    client.describeAvailabilityZonesAsync(asyncHandler);
    return asyncHandler
            .toDeferredResult()
            .handle((describeAvailabilityZonesResult, e) -> {
                if (e instanceof AmazonServiceException) {
                    AmazonServiceException ase = (AmazonServiceException) e;
                    if (ase.getStatusCode() == STATUS_CODE_UNAUTHORIZED) {
                        throw new LocalizableValidationException(e,
                                PHOTON_MODEL_ADAPTER_UNAUTHORIZED_MESSAGE,
                                PHOTON_MODEL_ADAPTER_UNAUTHORIZED_MESSAGE_CODE);
                    }
                }
                return null;
            });
}
/**
 * Accesses the client cache to get the EC2 client for the given auth credentials and regionId.
 * If a client is not found to exist, creates a new one and adds an entry in the cache for it.
 *
 * @param credentials The auth credentials to be used for the client creation
 * @param regionId The region of the AWS client
 * @param service The stateless service making the request and for which the executor pool needs to be allocated.
 * @return The AWSClient
 */
public AmazonEC2AsyncClient getOrCreateEC2Client(
        AuthCredentialsServiceState credentials, String regionId,
        StatelessService service, Consumer<Throwable> failConsumer) {
    if (this.awsClientType != AwsClientType.EC2) {
        throw new UnsupportedOperationException(
                "This client manager supports only AWS " + this.awsClientType + " clients.");
    }
    AmazonEC2AsyncClient amazonEC2Client = null;
    String cacheKey = createCredentialRegionCacheKey(credentials, regionId);
    try {
        amazonEC2Client = this.ec2ClientCache.computeIfAbsent(cacheKey,
                key -> AWSUtils.getAsyncClient(credentials, regionId, getExecutor()));
    } catch (Throwable e) {
        service.logSevere(e);
        failConsumer.accept(e);
    }
    return amazonEC2Client;
}
public static void tearDownTestVpc(
        AmazonEC2AsyncClient client, VerificationHost host,
        Map<String, Object> awsTestContext, boolean isMock) {
    if (!isMock && !vpcIdExists(client, AWS_DEFAULT_VPC_ID)) {
        final String vpcId = (String) awsTestContext.get(VPC_KEY);
        final String subnetId = (String) awsTestContext.get(SUBNET_KEY);
        final String internetGatewayId = (String) awsTestContext.get(INTERNET_GATEWAY_KEY);
        final String securityGroupId = (String) awsTestContext.get(SECURITY_GROUP_KEY);
        // clean up VPC and all its dependencies if creating one at setUp
        deleteSecurityGroupUsingEC2Client(client, host, securityGroupId);
        SecurityGroup securityGroup = new AWSSecurityGroupClient(client)
                .getSecurityGroup(AWS_DEFAULT_GROUP_NAME, vpcId);
        if (securityGroup != null) {
            deleteSecurityGroupUsingEC2Client(client, host, securityGroup.getGroupId());
        }
        deleteSubnet(client, subnetId);
        detachInternetGateway(client, vpcId, internetGatewayId);
        deleteInternetGateway(client, internetGatewayId);
        deleteVPC(client, vpcId);
    }
}
public static void tearDownTestDisk(
        AmazonEC2AsyncClient client, Map<String, Object> awsTestContext, boolean isMock) {
    if (awsTestContext.containsKey(DISK_KEY)) {
        String volumeId = awsTestContext.get(DISK_KEY).toString();
        if (!isMock) {
            deleteVolume(client, volumeId);
        }
        awsTestContext.remove(DISK_KEY);
    }
    if (awsTestContext.containsKey(SNAPSHOT_KEY)) {
        String snapshotId = awsTestContext.get(SNAPSHOT_KEY).toString();
        if (!isMock) {
            deleteSnapshot(client, snapshotId);
        }
        awsTestContext.remove(SNAPSHOT_KEY);
    }
}
/**
 * Method to directly provision instances on the AWS endpoint without the knowledge of the local
 * system. This is used to spawn instances and to test that the discovery of items not
 * provisioned by Xenon happens correctly.
 *
 * @throws Throwable
 */
public static List<String> provisionAWSVMWithEC2Client(AmazonEC2AsyncClient client,
        int numberOfInstance, String instanceType, String subnetId, String securityGroupId)
        throws Throwable {
    host.log("Provisioning %d instances on the AWS endpoint using the EC2 client.",
            numberOfInstance);
    RunInstancesRequest runInstancesRequest = new RunInstancesRequest()
            .withSubnetId(subnetId)
            .withImageId(EC2_LINUX_AMI).withInstanceType(instanceType)
            .withMinCount(numberOfInstance).withMaxCount(numberOfInstance)
            .withSecurityGroupIds(securityGroupId);
    // handler invoked once the EC2 runInstancesAsync command completes
    AWSRunInstancesAsyncHandler creationHandler = new AWSRunInstancesAsyncHandler(host);
    client.runInstancesAsync(runInstancesRequest, creationHandler);
    host.waitFor("Waiting for instanceIds to be returned from AWS", () -> {
        return checkInstanceIdsReturnedFromAWS(numberOfInstance, creationHandler.instanceIds);
    });
    return creationHandler.instanceIds;
}
/**
 * Method to get Instance details directly from Amazon.
 *
 * @throws Throwable
 */
public static List<Instance> getAwsInstancesByIds(AmazonEC2AsyncClient client,
        List<String> instanceIds) throws Throwable {
    host.log("Getting instances with ids " + instanceIds
            + " from the AWS endpoint using the EC2 client.");
    DescribeInstancesRequest describeInstancesRequest = new DescribeInstancesRequest()
            .withInstanceIds(instanceIds);
    DescribeInstancesResult describeInstancesResult = client
            .describeInstances(describeInstancesRequest);
    return describeInstancesResult.getReservations().stream()
            .flatMap(r -> r.getInstances().stream()).collect(Collectors.toList());
}
/**
 * Method that polls to see if the instances provisioned have turned ON. This method accepts an
 * error count to allow some room for errors in case all the requested resources are not
 * provisioned correctly.
 *
 * @return boolean if the required instances have been turned ON on AWS with some acceptable
 *         error rate.
 */
public static boolean computeInstancesStartedStateWithAcceptedErrorRate(
        AmazonEC2AsyncClient client, List<String> instanceIds, int errorRate) throws Throwable {
    // If there are no instanceIds set then return false
    if (instanceIds.size() == 0) {
        return false;
    }
    ArrayList<Boolean> provisioningFlags = new ArrayList<Boolean>(instanceIds.size());
    for (int i = 0; i < instanceIds.size(); i++) {
        provisioningFlags.add(i, Boolean.FALSE);
    }
    // Calls the describe instances API to get the latest state of each machine being
    // provisioned.
    checkInstancesStarted(host, instanceIds, provisioningFlags);
    int totalCount = instanceIds.size();
    int passCount = (int) Math.ceil((((100 - errorRate) / HUNDERED) * totalCount));
    int poweredOnCount = 0;
    for (boolean startedFlag : provisioningFlags) {
        if (startedFlag) {
            poweredOnCount++;
        }
    }
    return (poweredOnCount >= passCount);
}
/**
 * Gets the instance count of non-terminated instances on the AWS endpoint. This is used to run
 * the asserts and validate the results for the data that is collected during enumeration. This
 * also calculates the compute descriptions that will be used to represent the instances that
 * were discovered on the AWS endpoint.
 *
 * @throws Throwable
 */
public static BaseLineState getBaseLineInstanceCount(VerificationHost host,
        AmazonEC2AsyncClient client, List<String> testComputeDescriptions) throws Throwable {
    BaseLineState baseLineState = new BaseLineState();
    AWSEnumerationAsyncHandler enumerationHandler = new AWSEnumerationAsyncHandler(host,
            AWSEnumerationAsyncHandler.MODE.GET_COUNT, null, testComputeDescriptions,
            baseLineState);
    DescribeInstancesRequest request = new DescribeInstancesRequest();
    Filter runningInstanceFilter = getAWSNonTerminatedInstancesFilter();
    request.getFilters().add(runningInstanceFilter);
    client.describeInstancesAsync(request, enumerationHandler);
    host.waitFor("Error waiting to get base line instance count from AWS in test ", () -> {
        return baseLineState.isCountPopulated;
    });
    return baseLineState;
}
public static void waitForInstancesToBeTerminated(AmazonEC2AsyncClient client,
        List<String> instanceIdsToDelete) throws Throwable {
    if (instanceIdsToDelete.size() == 0) {
        return;
    }
    ArrayList<Boolean> deletionFlags = new ArrayList<>(instanceIdsToDelete.size());
    for (int i = 0; i < instanceIdsToDelete.size(); i++) {
        deletionFlags.add(i, Boolean.FALSE);
    }
    host.waitFor("Error waiting for EC2 client delete instances in test ", () -> {
        boolean isDeleted = computeInstancesTerminationState(client, instanceIdsToDelete,
                deletionFlags);
        if (isDeleted) {
            return true;
        }
        host.log(Level.INFO, "Waiting for EC2 instance deletion");
        Thread.sleep(TimeUnit.SECONDS.toMillis(10));
        return false;
    });
}
/**
 * Checks if a newly deleted instance has its status set to terminated.
 */
public static void checkInstancesDeleted(AmazonEC2AsyncClient client,
        List<String> instanceIdsToDelete, ArrayList<Boolean> deletionFlags) throws Throwable {
    AWSEnumerationAsyncHandler enumerationHandler = new AWSEnumerationAsyncHandler(host,
            AWSEnumerationAsyncHandler.MODE.CHECK_TERMINATION, deletionFlags, null);
    DescribeInstancesRequest request = new DescribeInstancesRequest()
            .withInstanceIds(instanceIdsToDelete);
    client.describeInstancesAsync(request, enumerationHandler);
    // Waiting to get a response from AWS before the state computation is done for the list of
    // VMs.
    host.waitFor("Waiting to get response from AWS ", () -> {
        return enumerationHandler.responseReceived;
    });
}
/**
 * Method to get disk details directly from Amazon.
 */
public static List<Volume> getAwsDisksByIds(AmazonEC2AsyncClient client,
        List<String> diskIds) throws Throwable {
    try {
        host.log("Getting disks with ids " + diskIds
                + " from the AWS endpoint using the EC2 client.");
        DescribeVolumesRequest describeVolumesRequest = new DescribeVolumesRequest()
                .withVolumeIds(diskIds);
        DescribeVolumesResult describeVolumesResult = client
                .describeVolumes(describeVolumesRequest);
        return describeVolumesResult.getVolumes();
    } catch (Exception e) {
        if (e instanceof AmazonEC2Exception
                && ((AmazonEC2Exception) e).getErrorCode()
                        .equalsIgnoreCase(AWS_INVALID_VOLUME_ID_ERROR_CODE)) {
            return null;
        }
    }
    return new ArrayList<>();
}
@Test
public void testResourceNaming() throws Throwable {
    boolean tagFound = false;
    AmazonEC2AsyncClient client = TestUtils.getClient(this.privateKeyId, this.privateKey,
            this.region, false);
    // create something to name
    AWSNetworkClient svc = new AWSNetworkClient(client);
    String vpcID = svc.createVPC("10.20.0.0/16");
    AWSUtils.tagResourcesWithName(client, TEST_NAME, vpcID);
    List<TagDescription> tags = AWSUtils.getResourceTags(vpcID, client);
    for (TagDescription tagDesc : tags) {
        if (tagDesc.getKey().equalsIgnoreCase(AWS_TAG_NAME)) {
            assertTrue(tagDesc.getValue().equalsIgnoreCase(TEST_NAME));
            tagFound = true;
            break;
        }
    }
    // ensure we found the tag
    assertTrue(tagFound);
    svc.deleteVPC(vpcID);
}
protected void assertBootDiskConfiguration(AmazonEC2AsyncClient client, Instance awsInstance,
        String diskLink) {
    DiskState diskState = getDiskState(diskLink);
    Volume bootVolume = getVolume(client, awsInstance, awsInstance.getRootDeviceName());
    assertEquals("Boot disk capacity in disk state is not matching the boot disk size of the "
            + "vm launched in aws", diskState.capacityMBytes, bootVolume.getSize() * 1024);
    assertEquals(
            "Boot disk type in disk state is not same as the type of the volume attached to the VM",
            diskState.customProperties.get("volumeType"), bootVolume.getVolumeType());
    assertEquals(
            "Boot disk iops in disk state is the same as the iops of the volume attached to the VM",
            Integer.parseInt(diskState.customProperties.get("iops")),
            bootVolume.getIops().intValue());
    assertEquals("Boot disk attach status is not matching", DiskService.DiskStatus.ATTACHED,
            diskState.status);
}
protected Volume getVolume(AmazonEC2AsyncClient client, Instance awsInstance, String deviceName) {
    InstanceBlockDeviceMapping bootDiskMapping = awsInstance.getBlockDeviceMappings().stream()
            .filter(blockDeviceMapping -> blockDeviceMapping.getDeviceName().equals(deviceName))
            .findAny()
            .orElse(null);
    // The ami used in this test is an ebs-backed AMI
    assertNotNull("Device type should be ebs type", bootDiskMapping.getEbs());
    String bootVolumeId = bootDiskMapping.getEbs().getVolumeId();
    DescribeVolumesRequest describeVolumesRequest = new DescribeVolumesRequest()
            .withVolumeIds(bootVolumeId);
    DescribeVolumesResult describeVolumesResult = client
            .describeVolumes(describeVolumesRequest);
    return describeVolumesResult.getVolumes().get(0);
}
public EC2(UserProviderCredentials credentials) {
    this.credentials_ = checkNotNull(credentials);
    checkState(!isNullOrEmpty(credentials.getLoginCredentials().getCredentialName()));
    checkNotNull(credentials.getRegion());
    checkState(!isNullOrEmpty(credentials.getRegion().getName()));
    checkState(!isNullOrEmpty(credentials.getRegion().getEndpoint()));
    this.awsCredentials_ = new BasicAWSCredentials(
            credentials.getLoginCredentials().getIdentity(),
            credentials.getLoginCredentials().getCredential());
    ec2_ = new AmazonEC2AsyncClient(this.awsCredentials_);
    ec2_.setEndpoint(credentials.getRegion().getEndpoint());
    this.defaultUserGroupName_ = System.getProperty("org.excalibur.security.default.group.name",
            "excalibur-security-group");
    backoffLimitedRetryHandler_ = new BackoffLimitedRetryHandler();
}
@Override
public void handleRequest(InputStream is, OutputStream os, Context context) {
    LambdaLogger logger = context.getLogger();
    String regionName = System.getenv("AWS_DEFAULT_REGION");
    AmazonEC2Async client = RegionUtils.getRegion(regionName)
            .createClient(AmazonEC2AsyncClient.class, new DefaultAWSCredentialsProviderChain(), cc);
    try {
        ObjectMapper om = new ObjectMapper();
        DeregisterImageRequest event = om.readValue(is, DeregisterImageRequest.class);
        String imageId = event.getDetail().getRequestParameters().getImageId();
        logger.log("Deregister AMI parge snapshot Start. ImageId[" + imageId + "]");
        List<Snapshot> snapshots = describeSnapshot(client, imageId, context);
        if (snapshots.size() == 0) {
            logger.log("Target of snapshot there is nothing.");
        } else {
            snapshots.stream().forEach(s -> pargeSnapshot(client, s.getSnapshotId(), context));
        }
        logger.log("[SUCCESS][DeregisterImage]has been completed successfully." + imageId);
    } catch (Exception e) {
        logger.log("[ERROR][DeregisterImage]An unexpected error has occurred. message["
                + e.getMessage() + "]");
    } finally {
        client.shutdown();
    }
}
private AWSTaskStatusChecker(AmazonEC2AsyncClient amazonEC2Client, String instanceId,
        String desiredState, Consumer<Object> consumer, TaskManager taskManager,
        StatelessService service, long expirationTimeMicros) {
    this(amazonEC2Client, instanceId, desiredState, Collections.emptyList(), consumer,
            taskManager, service, expirationTimeMicros);
}
private AWSTaskStatusChecker(AmazonEC2AsyncClient amazonEC2Client, String instanceId,
        String desiredState, List<String> failureStates, Consumer<Object> consumer,
        TaskManager taskManager, StatelessService service, long expirationTimeMicros) {
    this.instanceId = instanceId;
    this.amazonEC2Client = amazonEC2Client;
    this.consumer = consumer;
    this.desiredState = desiredState;
    this.failureStates = failureStates;
    this.taskManager = taskManager;
    this.service = service;
    this.expirationTimeMicros = expirationTimeMicros;
}
public static AWSTaskStatusChecker create(
        AmazonEC2AsyncClient amazonEC2Client, String instanceId, String desiredState,
        Consumer<Object> consumer, TaskManager taskManager, StatelessService service,
        long expirationTimeMicros) {
    return new AWSTaskStatusChecker(amazonEC2Client, instanceId, desiredState, consumer,
            taskManager, service, expirationTimeMicros);
}
public static AWSTaskStatusChecker create(
        AmazonEC2AsyncClient amazonEC2Client, String instanceId, String desiredState,
        List<String> failureStates, Consumer<Object> consumer, TaskManager taskManager,
        StatelessService service, long expirationTimeMicros) {
    return new AWSTaskStatusChecker(amazonEC2Client, instanceId, desiredState, failureStates,
            consumer, taskManager, service, expirationTimeMicros);
}
private void reset(AmazonEC2AsyncClient client, ResourceOperationRequest pr,
        DefaultAdapterContext c) {
    if (!c.child.powerState.equals(ComputeService.PowerState.ON)) {
        logWarning(() -> String.format("Cannot perform a reset on this EC2 instance. "
                + "The machine should be in powered on state"));
        c.taskManager.patchTaskToFailure(new IllegalStateException(
                "Incorrect power state. Expected the machine to be powered on "));
        return;
    }

    // The stop action for reset is a force stop. So we use the withForce method to set the
    // force parameter to TRUE. This is similar to unplugging the machine from the power
    // circuit. The OS and the applications are forcefully stopped.
    StopInstancesRequest stopRequest = new StopInstancesRequest();
    stopRequest.withInstanceIds(c.child.id).withForce(Boolean.TRUE);
    client.stopInstancesAsync(stopRequest,
            new AWSAsyncHandler<StopInstancesRequest, StopInstancesResult>() {
                @Override
                protected void handleError(Exception e) {
                    c.taskManager.patchTaskToFailure(e);
                }

                @Override
                protected void handleSuccess(StopInstancesRequest request, StopInstancesResult result) {
                    AWSUtils.waitForTransitionCompletion(getHost(), result.getStoppingInstances(),
                            "stopped", client, (is, e) -> {
                                if (e != null) {
                                    onError(e);
                                    return;
                                }
                                // Instances will be started only if they're successfully stopped
                                startInstance(client, c);
                            });
                }
            });
}
@Override
public void handlePatch(Operation op) {
    if (!op.hasBody()) {
        op.fail(new IllegalArgumentException("body is required"));
        return;
    }
    ResourceOperationRequest request = op.getBody(ResourceOperationRequest.class);
    op.complete();
    logInfo(() -> String.format("Handle operation %s for compute %s.", request.operation,
            request.resourceLink()));
    if (request.isMockRequest) {
        updateComputeState(new DefaultAdapterContext(this, request));
    } else {
        new DefaultAdapterContext(this, request)
                .populateBaseContext(BaseAdapterStage.VMDESC)
                .whenComplete((c, e) -> {
                    AmazonEC2AsyncClient client = this.clientManager.getOrCreateEC2Client(
                            c.parentAuth, c.child.description.regionId, this,
                            (t) -> c.taskManager.patchTaskToFailure(t));
                    if (client != null) {
                        reset(client, request, c);
                    }
                    // if the client is found to be null, it implies the task is already patched
                    // to failure in the catch block of getOrCreateEC2Client method
                    // (failConsumer.accept()). So it is not required to patch it again.
                });
    }
}
public static AmazonEC2AsyncClient getAsyncClient(
        AuthCredentialsServiceState credentials, String region, ExecutorService executorService) {
    ClientConfiguration configuration = new ClientConfiguration();
    configuration.setMaxConnections(100);
    configuration.withRetryPolicy(new RetryPolicy(new CustomRetryCondition(),
            DEFAULT_BACKOFF_STRATEGY, DEFAULT_MAX_ERROR_RETRY, false));
    AWSStaticCredentialsProvider awsStaticCredentialsProvider = new AWSStaticCredentialsProvider(
            new BasicAWSCredentials(credentials.privateKeyId,
                    EncryptionUtils.decrypt(credentials.privateKey)));
    AmazonEC2AsyncClientBuilder ec2AsyncClientBuilder = AmazonEC2AsyncClientBuilder
            .standard()
            .withClientConfiguration(configuration)
            .withCredentials(awsStaticCredentialsProvider)
            .withExecutorFactory(() -> executorService);
    if (region == null) {
        region = Regions.DEFAULT_REGION.getName();
    }
    if (isAwsClientMock()) {
        configuration.addHeader(AWS_REGION_HEADER, region);
        ec2AsyncClientBuilder.setClientConfiguration(configuration);
        AwsClientBuilder.EndpointConfiguration endpointConfiguration =
                new AwsClientBuilder.EndpointConfiguration(
                        getAWSMockHost() + AWS_MOCK_EC2_ENDPOINT, region);
        ec2AsyncClientBuilder.setEndpointConfiguration(endpointConfiguration);
    } else {
        ec2AsyncClientBuilder.setRegion(region);
    }
    return (AmazonEC2AsyncClient) ec2AsyncClientBuilder.build();
}
public static void validateCredentials(AmazonEC2AsyncClient ec2Client,
        AWSClientManager clientManager, AuthCredentialsServiceState credentials,
        ComputeEnumerateAdapterRequest context, StatelessService service, Operation op,
        Consumer<DescribeAvailabilityZonesResult> onSuccess, Consumer<Throwable> onFail) {
    if (clientManager.isEc2ClientInvalid(credentials, context.regionId)) {
        op.complete();
        return;
    }
    ec2Client.describeAvailabilityZonesAsync(new DescribeAvailabilityZonesRequest(),
            new AsyncHandler<DescribeAvailabilityZonesRequest, DescribeAvailabilityZonesResult>() {
                @Override
                public void onError(Exception e) {
                    if (e instanceof AmazonServiceException) {
                        AmazonServiceException ase = (AmazonServiceException) e;
                        if (ase.getStatusCode() == STATUS_CODE_UNAUTHORIZED) {
                            clientManager.markEc2ClientInvalid(service, credentials,
                                    context.regionId);
                            op.complete();
                            return;
                        }
                        onFail.accept(e);
                    }
                }

                @Override
                public void onSuccess(DescribeAvailabilityZonesRequest request,
                        DescribeAvailabilityZonesResult describeAvailabilityZonesResult) {
                    onSuccess.accept(describeAvailabilityZonesResult);
                }
            });
}
/**
 * Synchronous UnTagging of one or many AWS resources with the provided tags.
 */
public static void unTagResources(AmazonEC2AsyncClient client, Collection<Tag> tags,
        String... resourceIds) {
    if (isAwsClientMock()) {
        return;
    }
    DeleteTagsRequest req = new DeleteTagsRequest()
            .withTags(tags)
            .withResources(resourceIds);
    client.deleteTags(req);
}
/**
 * Synchronous Tagging of one or many AWS resources with the provided tags.
 */
public static void tagResources(AmazonEC2AsyncClient client, Collection<Tag> tags,
        String... resourceIds) {
    if (isAwsClientMock()) {
        return;
    }
    CreateTagsRequest req = new CreateTagsRequest()
            .withResources(resourceIds).withTags(tags);
    client.createTags(req);
}
public static List<TagDescription> getResourceTags(String resourceID,
        AmazonEC2AsyncClient client) {
    Filter resource = new Filter().withName(AWS_FILTER_RESOURCE_ID)
            .withValues(resourceID);
    DescribeTagsRequest req = new DescribeTagsRequest()
            .withFilters(resource);
    DescribeTagsResult result = client.describeTags(req);
    return result.getTags();
}
public static List<String> getOrCreateDefaultSecurityGroup(AmazonEC2AsyncClient amazonEC2Client,
        AWSNicContext nicCtx) {
    AWSSecurityGroupClient client = new AWSSecurityGroupClient(amazonEC2Client);
    // in case no group is configured in the properties, attempt to discover the default one
    if (nicCtx != null && nicCtx.vpc != null) {
        try {
            SecurityGroup group = client.getSecurityGroup(
                    DEFAULT_SECURITY_GROUP_NAME, nicCtx.vpc.getVpcId());
            if (group != null) {
                return Arrays.asList(group.getGroupId());
            }
        } catch (AmazonServiceException t) {
            if (!t.getMessage().contains(DEFAULT_SECURITY_GROUP_NAME)) {
                throw t;
            }
        }
    }
    // if the group doesn't exist an exception is thrown. We won't throw a missing group
    // exception; we will continue and create the group
    String groupId = client.createDefaultSecurityGroupWithDefaultRules(nicCtx.vpc);
    return Collections.singletonList(groupId);
}
public static void waitForTransitionCompletion(ServiceHost host,
        List<InstanceStateChange> stateChangeList, final String desiredState,
        AmazonEC2AsyncClient client, BiConsumer<InstanceState, Exception> callback) {
    InstanceStateChange stateChange = stateChangeList.get(0);
    try {
        DescribeInstancesRequest request = new DescribeInstancesRequest();
        request.withInstanceIds(stateChange.getInstanceId());
        DescribeInstancesResult result = client.describeInstances(request);
        Instance instance = result.getReservations()
                .stream()
                .flatMap(r -> r.getInstances().stream())
                .filter(i -> i.getInstanceId()
                        .equalsIgnoreCase(stateChange.getInstanceId()))
                .findFirst().orElseThrow(() -> new IllegalArgumentException(
                        String.format("%s instance not found", stateChange.getInstanceId())));
        String state = instance.getState().getName();
        if (state.equals(desiredState)) {
            callback.accept(instance.getState(), null);
        } else {
            host.schedule(() -> waitForTransitionCompletion(host, stateChangeList, desiredState,
                    client, callback), 5, TimeUnit.SECONDS);
        }
    } catch (AmazonServiceException | IllegalArgumentException ase) {
        callback.accept(null, ase);
    }
}
@Override
public void handlePatch(Operation op) {
    if (!op.hasBody()) {
        op.fail(new IllegalArgumentException("body is required"));
        return;
    }
    ResourceOperationRequest request = op.getBody(ResourceOperationRequest.class);
    op.complete();
    logInfo("Handle operation %s for compute %s.", request.operation,
            request.resourceLink());
    if (request.isMockRequest) {
        updateComputeState(request, new DefaultAdapterContext(this, request));
    } else {
        new DefaultAdapterContext(this, request)
                .populateBaseContext(BaseAdapterStage.VMDESC)
                .whenComplete((c, e) -> {
                    AmazonEC2AsyncClient client = this.clientManager.getOrCreateEC2Client(
                            c.parentAuth, c.child.description.regionId, this,
                            (t) -> c.taskManager.patchTaskToFailure(t));
                    if (client != null) {
                        reboot(client, c, request);
                    }
                    // if the client is found to be null, it implies the task is already patched to
                    // failure in the catch block of getOrCreateEC2Client method (failConsumer.accept()).
                    // So it is not required to patch it again.
                });
    }
}
/**
 * Start the instance and on success update the disk and compute state to
 * reflect the detach information.
 */
private void startInstance(AmazonEC2AsyncClient client, DiskContext c,
        DeferredResult<DiskContext> dr) {
    StartInstancesRequest startRequest = new StartInstancesRequest();
    startRequest.withInstanceIds(c.baseAdapterContext.child.id);
    client.startInstancesAsync(startRequest,
            new AWSAsyncHandler<StartInstancesRequest, StartInstancesResult>() {
                @Override
                protected void handleError(Exception e) {
                    dr.fail(e);
                }

                @Override
                protected void handleSuccess(StartInstancesRequest request,
                        StartInstancesResult result) {
                    AWSUtils.waitForTransitionCompletion(getHost(),
                            result.getStartingInstances(), "running", client,
                            (is, e) -> {
                                if (e != null) {
                                    dr.fail(e);
                                    return;
                                }
                                logInfo(() -> String.format(
                                        "[AWSComputeDiskDay2Service] Successfully started the "
                                                + "instance %s",
                                        result.getStartingInstances().get(0)
                                                .getInstanceId()));
                                updateComputeAndDiskState(dr, c);
                            });
                }
            });
}
public static void setUpTestVolume(VerificationHost host, AmazonEC2AsyncClient client,
        boolean isMock) {
    if (!isMock) {
        String volumeId = createVolume(host, client);
        awsTestContext.put(DISK_KEY, volumeId);
        String snapshotId = createSnapshot(host, client, volumeId);
        awsTestContext.put(SNAPSHOT_KEY, snapshotId);
    }
}
public static void setUpTestVpc(AmazonEC2AsyncClient client, boolean isMock,
        String zoneId) {
    awsTestContext.put(VPC_KEY, AWS_DEFAULT_VPC_ID);
    awsTestContext.put(NIC_SPECS_KEY, SINGLE_NIC_SPEC);
    awsTestContext.put(SUBNET_KEY, AWS_DEFAULT_SUBNET_ID);
    awsTestContext.put(SECURITY_GROUP_KEY, AWS_DEFAULT_GROUP_ID);

    // create new VPC, subnet, InternetGateway if the default VPC doesn't exist
    if (!isMock && !vpcIdExists(client, AWS_DEFAULT_VPC_ID)) {
        String vpcId = createVPC(client, AWS_DEFAULT_VPC_CIDR);
        awsTestContext.put(VPC_KEY, vpcId);
        String subnetId = createOrGetSubnet(client, AWS_DEFAULT_VPC_CIDR, zoneId);
        awsTestContext.put(SUBNET_KEY, subnetId);
        String internetGatewayId = createInternetGateway(client);
        awsTestContext.put(INTERNET_GATEWAY_KEY, internetGatewayId);
        attachInternetGateway(client, internetGatewayId);
        awsTestContext.put(SECURITY_GROUP_KEY, new AWSSecurityGroupClient(client)
                .createDefaultSecurityGroup(vpcId));

        NetSpec network = new NetSpec(vpcId, AWS_DEFAULT_VPC_CIDR);
        List<NetSpec> subnets = new ArrayList<>();
        subnets.add(new NetSpec(subnetId, AWS_DEFAULT_SUBNET_NAME, AWS_DEFAULT_SUBNET_CIDR,
                zoneId == null ? TestAWSSetupUtils.zoneId + avalabilityZoneIdentifier
                        : zoneId));
        NicSpec nicSpec = NicSpec.create()
                .withSubnetSpec(subnets.get(0))
                .withDynamicIpAssignment();
        awsTestContext.put(NIC_SPECS_KEY,
                new AwsNicSpecs(network, Collections.singletonList(nicSpec)));
    }
}
/**
 * Return true if vpcId exists.
 */
public static boolean vpcIdExists(AmazonEC2AsyncClient client, String vpcId) {
    List<Vpc> vpcs = client.describeVpcs()
            .getVpcs()
            .stream()
            .filter(vpc -> vpc.getVpcId().equals(vpcId))
            .collect(Collectors.toList());
    return vpcs != null && !vpcs.isEmpty();
}
/**
 * Return true if volumeId exists.
 */
public static boolean volumeIdExists(AmazonEC2AsyncClient client, String volumeId) {
    List<Volume> volumes = client.describeVolumes()
            .getVolumes()
            .stream()
            .filter(volume -> volume.getVolumeId().equals(volumeId))
            .collect(Collectors.toList());
    return volumes != null && !volumes.isEmpty();
}
/**
 * Return true if snapshotId exists.
 */
public static boolean snapshotIdExists(AmazonEC2AsyncClient client, String snapshotId) {
    List<Snapshot> snapshots = client.describeSnapshots()
            .getSnapshots()
            .stream()
            .filter(snapshot -> snapshot.getSnapshotId().equals(snapshotId))
            .collect(Collectors.toList());
    return snapshots != null && !snapshots.isEmpty();
}
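The three helpers above share one idiom: list a resource type, filter by id, and test the result for non-emptiness. Because the list built by `collect` is never null, the null check is redundant, and `anyMatch` expresses the same intent in a single pass without materializing a list. A self-contained sketch of the simplification (the nested `Vpc` class is an illustrative stand-in for `com.amazonaws.services.ec2.model.Vpc`, not the SDK type):

```java
import java.util.List;

public class ExistsIdiom {
    // Stand-in for com.amazonaws.services.ec2.model.Vpc (illustrative only)
    public static class Vpc {
        private final String vpcId;
        public Vpc(String vpcId) { this.vpcId = vpcId; }
        public String getVpcId() { return vpcId; }
    }

    // Same check as the helpers above, but in one pass with anyMatch
    public static boolean vpcIdExists(List<Vpc> vpcs, String vpcId) {
        return vpcs.stream().anyMatch(v -> v.getVpcId().equals(vpcId));
    }

    public static void main(String[] args) {
        List<Vpc> vpcs = List.of(new Vpc("vpc-1"), new Vpc("vpc-2"));
        System.out.println(vpcIdExists(vpcs, "vpc-2")); // true
        System.out.println(vpcIdExists(vpcs, "vpc-9")); // false
    }
}
```

`anyMatch` also short-circuits on the first hit, whereas `filter(...).collect(...)` always scans the whole listing.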
This concludes our coverage of troubleshooting the spoon.sh startup failure on Amazon EC2 Linux and of the echo amazon usage question. Thank you for reading. For more on Node-gyp Connect-mongo on Amazon AWS EC2 Linux, deploying a Rails 3.1 application to Amazon EC2, installing nginx 1.9.15 on the Amazon Linux distro, and example source code for com.amazonaws.services.ec2.AmazonEC2AsyncClient, please search this site.