
Hive error log :FAILED: Execution Error, return code 137 from org.apache.hadoop.hive.ql.exec.mr.Mapr



In this article, we'll cover Hive error log :FAILED: Execution Error, return code 137 from org.apache.hadoop.hive.ql.exec.mr.Mapr in detail. In addition, we'll look at Cause: org.apache.ibatis.executor.ExecutorException: Error getting generated key or setting resul..., Cursor fast_executemany error: ('HY000', '[HY000] [Microsoft][SQL Server Native Client 11.0]Unicode conversion failed (0) (SQLExecute)'), ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_D..., and Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask.


Hive error log :FAILED: Execution Error, return code 137 from org.apache.hadoop.hive.ql.exec.mr.Mapr

It is not easy to find the root cause from the log below. Does anybody know what it is? Thanks.


2018-10-22 03:45:41 INFO 2018-10-22 03:45:41,651 Stage-2(job_1540003897972_375058) map = 100%,  reduce = 99%, Cumulative CPU 22312.05 sec
2018-10-22 03:46:13 INFO 2018-10-22 03:46:13,311 Stage-2(job_1540003897972_375058) map = 100%,  reduce = 100%, Cumulative CPU 22352.18 sec
2018-10-22 03:46:16 INFO MapReduce Total cumulative CPU time: 0 days 6 hours 12 minutes 32 seconds 180 msec
2018-10-22 03:46:16 INFO Stage-2  Elapsed : 568540 ms  job_1540003897972_375058
2018-10-22 03:46:16 INFO Ended Job = job_1540003897972_375058
2018-10-22 03:46:18 INFO Execution failed with exit status: 137
2018-10-22 03:46:18 INFO Obtaining error information
2018-10-22 03:46:18 INFO 
2018-10-22 03:46:18 INFO Task failed!
2018-10-22 03:46:18 INFO Task ID:
2018-10-22 03:46:18 INFO Stage-20
2018-10-22 03:46:18 INFO 
2018-10-22 03:46:18 INFO Logs:
2018-10-22 03:46:18 INFO 
2018-10-22 03:46:18 INFO /data0/Logs/dd_edw/hive-0.12.0/hive.log
2018-10-22 03:46:18 INFO FAILED: Execution Error, return code 137 from org.apache.hadoop.hive.ql.exec.mr.MapredLocalTask
2018-10-22 03:46:18 INFO MapReduce Jobs Launched:
2018-10-22 03:46:19 INFO Stage-1: job_1540003897972_373806 SUCCESS HDFS Read: 2.583 GB HDFS Write: 18.235 GB Elapsed : 4m19s318ms
2018-10-22 03:46:19 INFO Map: Total: 206 Success: 206 Killed: 3 Failed: 0 avgMapTime: 1m6s436ms
2018-10-22 03:46:19 INFO Reduce: Total: 1000 Success: 1000 Killed: 4 Failed: 0 avgReduceTime: 7s423ms avgShuffleTime: 10s628ms avgMergeTime: 1s414ms
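For context (the log does not say this itself): an exit status of 137 is 128 + 9, i.e. the process was killed with SIGKILL, and MapredLocalTask is the local process Hive launches when it converts a join into a map join. The usual culprit is therefore the local map-join task being killed for exceeding memory. Below is a hedged sketch of settings that commonly work around this; these are real Hive properties, but whether they fit this particular job is an assumption:

-- Workaround 1: stop Hive from converting joins into local map joins.
set hive.auto.convert.join=false;

-- Workaround 2: keep map joins, but lower the small-table threshold (bytes)
-- so only genuinely small tables are loaded into the local hash table.
set hive.mapjoin.smalltable.filesize=25000000;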


Cause: org.apache.ibatis.executor.ExecutorException: Error getting generated key or setting resul...


MyBatis reports the following error when inserting data:

Cause: org.apache.ibatis.executor.ExecutorException: Error getting generated key or setting result to parameter object. Cause: java.sql.SQLException: Unsupported feature
at org.apache.ibatis.exceptions.ExceptionFactory.wrapException(ExceptionFactory.java:8)

Cause: the mapping includes the following attributes (useGeneratedKeys="true" assigns the newly generated primary key to the property you name in keyProperty, serialid in the mapping below):

<insert id="insert" parameterType="com.vimtech.bms.business.domain.monitor.finan.AssetsVisitReportWithBLOBs" useGeneratedKeys="true" keyProperty="serialid">

Solution 1: if you don't actually need the generated key, simply remove those attributes.

Solution 2: use selectKey to fetch the next primary-key value first and assign it to the corresponding key property.

Oracle version (query the sequence's next value, then assign it):

<selectKey resultType="java.lang.Long" order="BEFORE" keyProperty="###">
  SELECT SEQ_ASSETS_VISIT_REPORT.nextval AS ### FROM dual
</selectKey>
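To make the fix concrete, here is a minimal sketch of how the selectKey nests inside the insert mapping. It reuses serialid and SEQ_ASSETS_VISIT_REPORT from the snippets above; the table and column list are hypothetical placeholders:

<insert id="insert" parameterType="com.vimtech.bms.business.domain.monitor.finan.AssetsVisitReportWithBLOBs">
  <!-- Runs BEFORE the INSERT and writes the sequence value into serialid -->
  <selectKey resultType="java.lang.Long" order="BEFORE" keyProperty="serialid">
    SELECT SEQ_ASSETS_VISIT_REPORT.nextval AS serialid FROM dual
  </selectKey>
  INSERT INTO ASSETS_VISIT_REPORT (SERIALID /*, other columns */)
  VALUES (#{serialid} /*, other values */)
</insert>

Note that useGeneratedKeys is gone: the key now comes from the sequence, not from the JDBC driver.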

SQL Server version:

<selectKey resultType="java.lang.Integer" keyProperty="timelineConfigId">
  SELECT @@IDENTITY AS TIMELINE_CONFIG_ID
</selectKey>

 

Cursor fast_executemany error: ('HY000', '[HY000] [Microsoft][SQL Server Native Client 11.0]Unicode conversion failed (0) (SQLExecute)')


How can I fix Cursor fast_executemany error: ('HY000', '[HY000] [Microsoft][SQL Server Native Client 11.0]Unicode conversion failed (0) (SQLExecute)')?

I am trying to upload several large csv files to a SQL Server database using the cursor's fast_executemany option. Below is a code sample.

import os
import zipfile
import pandas as pd
import pyodbc

# zip_ext, ext and directory are assumed to be defined earlier in the script.
# (This loop line is reconstructed; the original snippet started mid-loop.)
for item in os.listdir(directory):  # Unzip any archives first
    if item.endswith(zip_ext):
        file_name = os.path.abspath(item)
        zip_ref = zipfile.ZipFile(file_name)
        zip_ref.extractall(directory)
        zip_ref.close()
        os.remove(file_name)

for item in os.listdir(directory):  # Load and edit CSV
    if item.endswith(ext):
        df = pd.read_csv(item)
        df.rename(columns={df.columns[0]: 'InvoiceNo', df.columns[2]: 'OrderNo',
                           df.columns[20]: 'Syscode', df.columns[27]: 'SpotDate',
                           df.columns[28]: 'Network', df.columns[30]: 'Spottime',
                           df.columns[29]: 'SpotLength', df.columns[31]: 'Program',
                           df.columns[32]: 'SpotName', df.columns[21]: 'Source'},
                  inplace=True)

        df['BillDate'] = '2021-03-01'  # Enter preferred bill date here!
        df['FileName'] = str(item)
        df[['SpotDate', 'BillDate', 'Spottime']] = df[['SpotDate', 'BillDate', 'Spottime']].apply(pd.to_datetime)
        df['Spottime'] = df['Spottime'].dt.time
        df['OrderNo'] = df['OrderNo'].apply(lambda x: '' if x == 'NULL' else x)
        # Column order matches the INSERT statement below (BillDate included)
        df = df[['InvoiceNo', 'OrderNo', 'BillDate', 'Syscode', 'SpotDate', 'Network',
                 'Spottime', 'SpotLength', 'Program', 'SpotName', 'Source', 'FileName']]

        # Connect to SQL Server
        conn = pyodbc.connect('DRIVER={SQL Server Native Client 11.0};'
                              'SERVER=PWDBS006sql;'  # UPDATED 2/4/21
                              'DATABASE=Marketing Cross-Channel Affidavits;'
                              'Trusted_Connection=yes;', autocommit=True)
        crsr = conn.cursor()
        crsr.fast_executemany = False  # the Unicode error appears when this is True

        # Insert df into SQL Server: one parameter marker per column (12)
        sql_statement = '''INSERT INTO dbo.SpectrumReach_Marketing
                           (InvoiceNo, OrderNo, BillDate, Syscode, SpotDate, Network,
                            Spottime, SpotLength, Program, SpotName, Source, FileName)
                           VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)'''
        list_of_tuples = list(df.itertuples(index=False))
        crsr.executemany(sql_statement, list_of_tuples)

        crsr.close()
        conn.close()

When I run this code, I get the error: ('HY000', '[HY000] [Microsoft][SQL Server Native Client 11.0]Unicode conversion failed (0) (SQLExecute)').

I have used this code on several large datasets from several sources that should all be formatted exactly the same, and it works; but for this particular vendor it breaks with the error above. The code also runs when I set fast_executemany = False, but given the size of the files I am trying to transfer, uploading the data that slowly is not feasible.

Any help would be greatly appreciated!

Solution

No confirmed solution to this problem has been found yet.
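That said, this class of pyodbc error (Unicode conversion failing only under fast_executemany) is frequently triggered by values the driver cannot convert, such as the float('nan') that pandas produces for empty CSV fields. Below is a minimal, hedged sketch of sanitizing the rows before the bulk insert; clean_value is a hypothetical helper, and df, crsr and sql_statement are the objects from the question:

import math

def clean_value(v):
    # pandas stores missing CSV fields as float('nan'); send SQL NULL instead
    if isinstance(v, float) and math.isnan(v):
        return None
    return v

list_of_tuples = [tuple(clean_value(v) for v in row)
                  for row in df.itertuples(index=False)]

crsr.fast_executemany = True
crsr.executemany(sql_statement, list_of_tuples)

If that is not enough, pyodbc's Cursor.setinputsizes can pin the parameter types explicitly so the driver does not have to guess them.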


ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_D...


When using Sqoop to import data from a MySQL table into Hive, the following error occurs:

 ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
The command was:
./sqoop import --connect jdbc:mysql://slave2:3306/mysql --username root --password aaa --table people --hive-import --hive-overwrite --hive-table people --fields-terminated-by '\t';

Solution:

Append the following line to the end of /etc/profile:
export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$HIVE_HOME/lib/*
Then reload the configuration with: source /etc/profile

 

Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask


Showing 4096 bytes of 17167 total:

.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
17/01/20 09:39:23 ERROR client.RemoteDriver: Shutting down remote driver due to error: java.lang.InterruptedException
java.lang.InterruptedException
	at java.lang.Object.wait(Native Method)
	at org.apache.spark.scheduler.TaskSchedulerImpl.waitBackendReady(TaskSchedulerImpl.scala:623)
	at org.apache.spark.scheduler.TaskSchedulerImpl.postStartHook(TaskSchedulerImpl.scala:170)
	at org.apache.spark.scheduler.cluster.YarnClusterScheduler.postStartHook(YarnClusterScheduler.scala:33)
	at org.apache.spark.SparkContext.<init>(SparkContext.scala:595)
	at org.apache.spark.api.java.JavaSparkContext.<init>(JavaSparkContext.scala:59)
	at org.apache.hive.spark.client.RemoteDriver.<init>(RemoteDriver.java:169)
	at org.apache.hive.spark.client.RemoteDriver.main(RemoteDriver.java:556)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:606)
	at org.apache.spark.deploy.yarn.ApplicationMaster$$anon$2.run(ApplicationMaster.scala:542)
17/01/20 09:39:23 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with FAILED (diag message: Uncaught exception: org.apache.hadoop.yarn.exceptions.InvalidResourceRequestException: Invalid resource request, requested virtual cores < 0, or requested virtual cores > max configured, requestedVirtualCores=4, maxVirtualCores=2
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.validateResourceRequest(SchedulerUtils.java:258)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndValidateRequest(SchedulerUtils.java:226)
	at org.apache.hadoop.yarn.server.resourcemanager.scheduler.SchedulerUtils.normalizeAndvalidateRequest(SchedulerUtils.java:233)
	at org.apache.hadoop.yarn.server.resourcemanager.RMServerUtils.normalizeAndValidateRequests(RMServerUtils.java:97)
	at org.apache.hadoop.yarn.server.resourcemanager.ApplicationMasterService.allocate(ApplicationMasterService.java:504)
	at org.apache.hadoop.yarn.api.impl.pb.service.ApplicationMasterProtocolPBServiceImpl.allocate(ApplicationMasterProtocolPBServiceImpl.java:60)
	at org.apache.hadoop.yarn.proto.ApplicationMasterProtocol$ApplicationMasterProtocolService$2.callBlockingMethod(ApplicationMasterProtocol.java:99)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2086)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2082)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:415)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1698)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2080)
)
17/01/20 09:39:23 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
17/01/20 09:39:24 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1484288256809_0021
17/01/20 09:39:24 INFO storage.DiskBlockManager: Shutdown hook called
17/01/20 09:39:24 INFO util.ShutdownHookManager: Shutdown hook called
17/01/20 09:39:24 INFO util.ShutdownHookManager: Deleting directory /yarn/nm/usercache/anonymous/appcache/application_1484288256809_0021/spark-3f3ac5b0-5a46-48d7-929b-81b7820c9e81/userFiles-af94b1af-604f-4423-b1e4-0384e372c1f8
17/01/20 09:39:24 INFO util.ShutdownHookManager: Deleting directory /yarn/nm/usercache/anonymous/appcache/application_1484288256809_0021/spark-3f3ac5b0-5a46-48d7-929b-81b7820c9e81
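Although the post ends with the raw log, the log already names the root cause: InvalidResourceRequestException with requestedVirtualCores=4, maxVirtualCores=2, i.e. the Spark executors requested more vcores per container than YARN allows. Below is a hedged sketch of the two obvious remedies; the property names are real, but the values are assumptions for this cluster:

-- Option 1 (job side): request no more vcores than YARN's per-container cap.
-- In a Hive-on-Spark session, spark.* settings can be set through Hive:
set spark.executor.cores=2;

-- Option 2 (cluster side): raise the cap in yarn-site.xml and restart YARN:
--   <property>
--     <name>yarn.scheduler.maximum-allocation-vcores</name>
--     <value>4</value>
--   </property>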

That concludes our discussion of Hive error log :FAILED: Execution Error, return code 137 from org.apache.hadoop.hive.ql.exec.mr.Mapr. Thank you for reading. For more on Cause: org.apache.ibatis.executor.ExecutorException: Error getting generated key or setting resul..., Cursor fast_executemany error: ('HY000', '[HY000] [Microsoft][SQL Server Native Client 11.0]Unicode conversion failed (0) (SQLExecute)'), ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_D..., and Error, return code 1 from org.apache.hadoop.hive.ql.exec.spark.SparkTask, you can search this site.
