Contents of this article:
- How do I sum all values in a column excluding string values in Oracle? (Oracle, excluding strings)
- Setting up single-instance bidirectional replication with Oracle GoldenGate (oracle-oracle)
- [Translation] How to unit test an Angular controller
- [DBA from Beginner to Practitioner] Episode 7: How to diagnose and tune SQL on OceanBase?
- [MySQL] How to trace a SQL statement?
How do I sum all values in a column excluding string values in Oracle? (Oracle, excluding strings)
How to solve: How do I sum all values in a column excluding string values in Oracle?
I have a table and I want to compute the sum of two fields. However, the column contains mixed data types, so trying to sum the values raises an error. I am looking for a way to add all numeric values in a column while excluding the non-numeric ones.
Here is my table:
id | value | stock
1,-,45
1,30,45
2,-
2,-
3,400,55
3,60
4,404,55
Here is what I want the output to look like:
id | value_total | stock_total
1,800,115
4,45
Here is my code:
SELECT id, SUM(NVL(value, 0)) AS value_total, SUM(NVL(stock, 0)) AS stock_total
FROM table1
GROUP BY id;
I get this error:
ORA-01722: invalid number
01722. 00000 - "invalid number"
*Cause: The specified number was invalid.
*Action: Specify a valid number.
My assumption is that the error comes from the '-' values in the column. Any tips or suggestions on how to add all numeric values in the fields while excluding the '-' values?
Solution
You can use a CASE expression together with regexp_like(), so that only strings representing numbers (in decimal notation) are passed to to_number(), and 0 is returned otherwise. Something like:
SELECT SUM(CASE
             WHEN regexp_like(value, '^(\+|-)?[0-9]*((\.)?[0-9])[0-9]*$') THEN
               to_number(value)
             ELSE
               0
           END) AS value_total
FROM table1;
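As a sanity check outside the database, the same filter-then-sum logic can be sketched in JavaScript. The pattern below is transcribed from the query above; it is an illustration of the idea, not Oracle's regexp engine:

```javascript
// Pattern mirroring the regexp_like() filter: optional sign, digits,
// and an optional decimal point (transcribed from the query above)
const numericPattern = /^(\+|-)?[0-9]*((\.)?[0-9])[0-9]*$/;

// Sum only the values that look numeric, treating everything else as 0,
// just as the CASE expression does
function sumNumeric(values) {
  return values
    .filter((v) => numericPattern.test(v))
    .reduce((acc, v) => acc + Number(v), 0);
}

console.log(sumNumeric(["-", "30", "400", "60", "404"])); // 894
```

Non-numeric placeholders such as "-" fail the pattern and contribute nothing to the sum, which is exactly the behavior the CASE expression enforces in SQL.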
But ideally you would fix the schema and use an appropriate data type for the column, i.e. some number variant.
Alternatively, use to_number() with the on conversion error clause:
SELECT id, SUM(TO_NUMBER(value DEFAULT 0 ON CONVERSION ERROR)) AS value_total, SUM(TO_NUMBER(stock DEFAULT 0 ON CONVERSION ERROR)) AS stock_total
FROM table1
GROUP BY id;
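The ON CONVERSION ERROR clause makes every non-numeric value contribute the default (0) to the sum. A per-value JavaScript stand-in for that behavior, for illustration only (Oracle's NUMBER conversion rules differ in detail, e.g. around NULLs):

```javascript
// Hypothetical stand-in for Oracle's TO_NUMBER(x DEFAULT 0 ON CONVERSION ERROR):
// return the parsed number, or the default when the string is not numeric
function toNumberOrDefault(value, dflt = 0) {
  const n = Number(value);
  return Number.isNaN(n) ? dflt : n;
}

console.log(toNumberOrDefault("30")); // 30
console.log(toNumberOrDefault("-"));  // 0
```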
oracle ogg 单实例双向复制搭建(oracle-oracle)--Oracle GoldenGate
--Continuing from yesterday's test, this post implements single-instance bidirectional replication (rebuilt from scratch)
--Environment unchanged
db1, db2 (single instance)
10.1*.1*
orcl, ogg
CentOS 6.5, CentOS 6.5
11.2.0.4, 11.2.0.4
1 Check archiving and logging mode (orcl, ogg)
SCOTT@ orcl >conn / as sysdba
Connected.
SYS@ orcl >select NAME,OPEN_MODE,FORCE_LOGGING,SUPPLEMENTAL_LOG_DATA_MIN from v$database;
NAME OPEN_MODE FOR SUPPLEME
--------- -------------------- --- --------
ORCL READ WRITE YES YES
SYS@ orcl >archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 21
Next log sequence to archive 23
Current log sequence 23
SYS@ orcl >alter system switch logfile;
System altered.
1 row selected.
SCOTT@ ogg >conn / as sysdba
Connected.
SYS@ ogg >select NAME,OPEN_MODE,FORCE_LOGGING,SUPPLEMENTAL_LOG_DATA_MIN from v$database;
NAME OPEN_MODE FOR SUPPLEME
--------- -------------------- --- --------
OGG READ WRITE YES YES
SYS@ ogg >archive log list;
Database log mode Archive Mode
Automatic archival Enabled
Archive destination USE_DB_RECOVERY_FILE_DEST
Oldest online log sequence 74
Next log sequence to archive 76
Current log sequence 76
SYS@ ogg >alter system switch logfile;
System altered.
2 Create the OGG tablespace and OGG user -- required on both servers; already done earlier (orcl, ogg)
---Run the OGG DDL-support scripts
To make OGG support DDL operations, several additional scripts must be run. They ship with OGG, not with Oracle, and are needed on both the source and the target.
grant CONNECT, RESOURCE to ogg;
grant SELECT ANY DICTIONARY, SELECT ANY TABLE to ogg;
grant ALTER ANY TABLE to ogg;
grant FLASHBACK ANY TABLE to ogg;
grant EXECUTE on DBMS_FLASHBACK to ogg;
grant insert any table to ogg;
grant update any table to ogg;
grant delete any table to ogg;
GRANT EXECUTE ON UTL_FILE TO ogg;
GRANT CREATE TABLE,CREATE SEQUENCE TO ogg;
grant create any table to ogg;
grant create any view to ogg;
grant create any procedure to ogg;
grant create any sequence to ogg;
grant create any index to ogg;
grant create any trigger to ogg;
grant create any view to ogg;
[oracle@ogg ~]$ cd /u01/app/ogg
[oracle@ogg ogg]$ sqlplus / as sysdba
---SYS@ orcl >@/u01/app/ogg/marker_setup.sql
---SYS@ orcl >@/u01/app/ogg/ddl_setup.sql
---SYS@ orcl >@/u01/app/ogg/role_setup.sql
---SYS@ orcl >@/u01/app/ogg/ddl_enable.sql
If errors occur during installation:
SYS@ orcl >@/u01/app/ogg/ddl_setup.sql
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
126/9 PL/SQL: SQL Statement ignored
128/23 PL/SQL: ORA-00942: table or view does not exist
133/21 PL/SQL: ORA-02289: sequence does not exist
133/5 PL/SQL: SQL Statement ignored
657/14 PLS-00905: object OGG.DDLAUX is invalid
657/5 PL/SQL: Statement ignored
919/25 PL/SQL: ORA-00942: table or view does not exist
919/4 PL/SQL: SQL Statement ignored
### Uninstall the OGG DDL objects and disable DDL support


---SYS@ orcl >@/u01/app/ogg/ddl_disable.sql
SYS@ orcl >@/u01/app/ogg/ddl_disable.sql
Trigger altered.
SYS@ orcl >@/u01/app/ogg/ddl_remove.sql
DDL replication removal script.
WARNING: this script removes all DDL replication objects and data.
You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
Enter Oracle GoldenGate schema name:scott
Working, please wait ...
Spooling to file ddl_remove_spool.txt
Script complete.
SYS@ orcl >@/u01/app/ogg/marker_remove.sql
Marker removal script.
WARNING: this script removes all marker objects and data.
You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
Enter Oracle GoldenGate schema name:scott
PL/SQL procedure successfully completed.
Sequence dropped.
Table dropped.
Script complete.
--Check the required privileges, then log in from the OGG script directory /u01/app/ogg


SQL> @/u01/app/ogg/marker_setup.sql
Marker setup script
You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.
Enter Oracle GoldenGate schema name:ogg
Marker setup table script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to OGG
MARKER TABLE
-------------------------------
OK
MARKER SEQUENCE
-------------------------------
OK
Script complete.
SQL> @/u01/app/ogg/ddl_setup.sql
Oracle GoldenGate DDL Replication setup script
Verifying that current user has privileges to install DDL Replication...
You will be prompted for the name of a schema for the Oracle GoldenGate database objects.
NOTE: For an Oracle 10g source, the system recycle bin must be disabled. For Oracle 11g and later, it can be enabled.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.
Enter Oracle GoldenGate schema name:ogg
Working, please wait ...
Spooling to file ddl_setup_spool.txt
Checking for sessions that are holding locks on Oracle Golden Gate metadata tables ...
Check complete.
Using OGG as a Oracle GoldenGate schema name.
Working, please wait ...
DDL replication setup script complete, running verification script...
Please enter the name of a schema for the GoldenGate database objects:
Setting schema name to OGG
CLEAR_TRACE STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
CREATE_TRACE STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
TRACE_PUT_LINE STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
INITIAL_SETUP STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
DDLVERSIONSPECIFIC PACKAGE STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
DDLREPLICATION PACKAGE STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
DDLREPLICATION PACKAGE BODY STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
DDL IGNORE TABLE
-----------------------------------
OK
DDL IGNORE LOG TABLE
-----------------------------------
OK
DDLAUX PACKAGE STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
DDLAUX PACKAGE BODY STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
SYS.DDLCTXINFO PACKAGE STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
SYS.DDLCTXINFO PACKAGE BODY STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
DDL HISTORY TABLE
-----------------------------------
OK
DDL HISTORY TABLE(1)
-----------------------------------
OK
DDL DUMP TABLES
-----------------------------------
OK
DDL DUMP COLUMNS
-----------------------------------
OK
DDL DUMP LOG GROUPS
-----------------------------------
OK
DDL DUMP PARTITIONS
-----------------------------------
OK
DDL DUMP PRIMARY KEYS
-----------------------------------
OK
DDL SEQUENCE
-----------------------------------
OK
GGS_TEMP_COLS
-----------------------------------
OK
GGS_TEMP_UK
-----------------------------------
OK
DDL TRIGGER CODE STATUS:
Line/pos Error
---------------------------------------- -----------------------------------------------------------------
No errors No errors
DDL TRIGGER INSTALL STATUS
-----------------------------------
OK
DDL TRIGGER RUNNING STATUS
------------------------------------------------------------------------------------------------------------------------
ENABLED
STAYMETADATA IN TRIGGER
------------------------------------------------------------------------------------------------------------------------
OFF
DDL TRIGGER SQL TRACING
------------------------------------------------------------------------------------------------------------------------
0
DDL TRIGGER TRACE LEVEL
------------------------------------------------------------------------------------------------------------------------
0
LOCATION OF DDL TRACE FILE
------------------------------------------------------------------------------------------------------------------------
/u01/app/oracle/diag/rdbms/ogg/ogg/trace/ggs_ddl_trace.log
Analyzing installation status...
STATUS OF DDL REPLICATION
------------------------------------------------------------------------------------------------------------------------
SUCCESSFUL installation of DDL Replication software components
Script complete.
SQL> @/u01/app/ogg/role_setup.sql
GGS Role setup script
This script will drop and recreate the role GGS_GGSUSER_ROLE
To use a different role name, quit this script and then edit the params.sql script to change the gg_role parameter to the preferred name. (Do not run the script.)
You will be prompted for the name of a schema for the GoldenGate database objects.
NOTE: The schema must be created prior to running this script.
NOTE: Stop all DDL replication before starting this installation.
Enter GoldenGate schema name:ogg
Wrote file role_setup_set.txt
PL/SQL procedure successfully completed.
Role setup script complete
Grant this role to each user assigned to the Extract, GGSCI, and Manager processes, by using the following SQL command:
GRANT GGS_GGSUSER_ROLE TO <loggedUser>
where <loggedUser> is the user assigned to the GoldenGate processes.
SQL> @/u01/app/ogg/ddl_enable.sql
Trigger altered.
3 OGG configuration
1 Create the OGG working directories (orcl, ogg)
GGSCI (DSI) 1> create subdirs
Creating subdirectories under current directory /u01/app/ogg
Parameter files /u01/app/ogg/dirprm: already exists
Report files /u01/app/ogg/dirrpt: created
Checkpoint files /u01/app/ogg/dirchk: created
Process status files /u01/app/ogg/dirpcs: created
SQL script files /u01/app/ogg/dirsql: created
Database definitions files /u01/app/ogg/dirdef: created
Extract data files /u01/app/ogg/dirdat: created
Temporary files /u01/app/ogg/dirtmp: created
Stdout files /u01/app/ogg/dirout: created
2 Enable table-level TRANDATA (orcl, ogg)


GGSCI (DSI) 2> DBLOGIN USERID ogg, PASSWORD ogg
Successfully logged into database.
GGSCI (DSI) 3> add trandata scott.emp_ogg
Logging of supplemental redo log data is already enabled for table SCOTT.EMP_OGG.
GGSCI (DSI) 4> add trandata scott.dept_ogg
Logging of supplemental redo log data is already enabled for table SCOTT.DEPT_OGG.
GGSCI (DSI) 5> add trandata scott.dept
Logging of supplemental redo log data is already enabled for table SCOTT.DEPT.
GGSCI (DSI) 6> add trandata scott.emp;
ERROR: No viable tables matched specification.
GGSCI (DSI) 7> INFO TRANDATA scott.*
Logging of supplemental redo log data is disabled for table SCOTT.BONUS.
Logging of supplemental redo log data is enabled for table SCOTT.DEPT.
Columns supplementally logged for table SCOTT.DEPT: DEPTNO.
Logging of supplemental redo log data is enabled for table SCOTT.DEPT_OGG.
Columns supplementally logged for table SCOTT.DEPT_OGG: DEPTNO.
Logging of supplemental redo log data is enabled for table SCOTT.EMP.
Columns supplementally logged for table SCOTT.EMP: EMPNO.
Logging of supplemental redo log data is enabled for table SCOTT.EMP_OGG.
Columns supplementally logged for table SCOTT.EMP_OGG: EMPNO.
Logging of supplemental redo log data is disabled for table SCOTT.SALGRADE.
3 Initialize the data (orcl)
SYS@ orcl >create directory dump_file_dir as '/u01/app/oracle/dump';
Directory created.
[oracle@DSI oracle]$ mkdir -p /u01/app/oracle/dump
[oracle@DSI oracle]$ expdp scott/*@*/orcl schemas=scott directory=dump_file_dir dumpfile=scott_schemas_20190620.dmp logfile=scott_schemas_20190620.log
[oracle@ogg ogg]$ export ORACLE_SID=ogg
[oracle@ogg ogg]$ mkdir -p /u01/app/oracle/dump
[oracle@DSI dump]$ scp scott_schemas_20190620.dmp oracle@*:/u01/app/oracle/dump/.
[oracle@ogg dump]$ impdp scott/*@*/ogg directory=dump_file_dir dumpfile=scott_schemas_20190620.dmp logfile=scott_schemas_20190620.log
1 Configure the MGR manager process group (orcl, ogg)
> edit params mgr
port 7839
DYNAMICPORTLIST 7840-7850
AUTOSTART EXTRACT *
AUTORESTART EXTRACT *, RETRIES 5, WAITMINUTES 3
PURGEOLDEXTRACTS ./dirdat/*,usecheckpoints, minkeepdays 7
LAGREPORTHOURS 1
LAGINFOMINUTES 30
LAGCRITICALMINUTES 45
2 Configure the Extract process group (orcl, ogg)
> add extract ext1, tranlog, begin now
> add EXTTRAIL ./dirdat/r1, extract ext1,megabytes 100
> edit param ext1
EXTRACT ext1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
userid ogg,password ogg
REPORTCOUNT EVERY 1 MINUTES, RATE
numfiles 5000
DISCARDFILE ./dirrpt/ext1.dsc,APPEND,MEGABYTES 1024
DISCARDROLLOVER AT 3:00
exttrail ./dirdat/r1,megabytes 100
dynamicresolution
TRANLOGOPTIONS DISABLESUPPLOGCHECK
GetTruncates
TranLogOptions ExcludeUser ogg
--DDL Include All
DDL &
INCLUDE MAPPED OBJTYPE 'table' &
INCLUDE MAPPED OBJTYPE 'index' &
EXCLUDE OPTYPE COMMENT
DDLOptions AddTranData Retryop Retrydelay 10 Maxretries 10
TABLE scott.EMP_OGG;
TABLE scott.DEPT_OGG;
TABLE scott.DEPT;
3 Configure the pump process group (orcl, ogg)
> edit param pump1
extract pump1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
userid ogg,password ogg
dynamicresolution
passthru
rmthost *, mgrport 7839, compress
rmttrail ./dirdat/t1
numfiles 5000
TABLE scott.EMP_OGG;
TABLE scott.DEPT_OGG;
TABLE scott.DEPT;
> add extract pump1 ,exttrailsource ./dirdat/r1,begin now
> add rmttrail ./dirdat/t1,extract pump1, MEGABYTES 5
4 Add the checkpoint table (orcl, ogg)
> edit params ./GLOBALS
GGSchema ogg
CHECKPOINTTABLE ogg.ggschkpt
> exit
> dblogin userid ogg,password ogg
> ADD CHECKPOINTTABLE
5 Configure the Replicat process group (orcl, ogg)
> edit param rep1
REPLICAT rep1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
USERID ogg,PASSWORD ogg
REPORTCOUNT EVERY 30 MINUTES, RATE
REPERROR DEFAULT, ABEND
numfiles 5000
assumetargetdefs
DISCARDFILE ./dirrpt/rep1.dsc, APPEND, MEGABYTES 1000
DISCARDROLLOVER AT 3:00
ALLOWNOOPUPDATES
DBOPTIONS DEFERREFCONST
dynamicresolution
assumetargetdefs
reperror default,discard
MAP scott.emp_ogg, TARGET scott.emp_ogg;
MAP scott.dept_ogg, TARGET scott.dept_ogg;
MAP scott.dept, TARGET scott.dept;
> add replicat rep1,exttrail ./dirdat/t1,checkpointtable ogg.ggschkpt
> start rep1
--orcl (test one-way first): on the orcl side, run start ext1 and start pump1
GGSCI (DSI) 5> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING EXT1 00:19:12 00:00:09
EXTRACT RUNNING PUMP1 00:00:00 00:10:20
--ogg: on the ogg side, run start rep1
GGSCI (ogg) 6> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
REPLICAT RUNNING REP1 00:00:00 00:00:01
--One-way replication test


SYS@ orcl >conn scott/tiger
Connected.
SCOTT@ orcl >update emp_ogg set ename='hq_orcl_1' where empno=7934;
1 row updated.
SCOTT@ orcl >commit;
Commit complete.
GGSCI (DSI) 6> stats pump1
Sending STATS request to EXTRACT PUMP1 ...
Start of Statistics at 2019-06-20 15:40:11.
Output to ./dirdat/t1:
Extracting from SCOTT.EMP_OGG to SCOTT.EMP_OGG:
*** Total statistics since 2019-06-20 15:39:37 ***
Total inserts 0.00
Total updates 1.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00
*** Daily statistics since 2019-06-20 15:39:37 ***
Total inserts 0.00
Total updates 1.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00
*** Hourly statistics since 2019-06-20 15:39:37 ***
Total inserts 0.00
Total updates 1.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00
*** Latest statistics since 2019-06-20 15:39:37 ***
Total inserts 0.00
Total updates 1.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00
End of Statistics.
GGSCI (ogg) 7> stats rep1
Sending STATS request to REPLICAT REP1 ...
Start of Statistics at 2019-06-20 15:40:21.
Replicating from SCOTT.EMP_OGG to SCOTT.EMP_OGG:
*** Total statistics since 2019-06-20 15:39:42 ***
No database operations have been performed.
*** Daily statistics since 2019-06-20 15:39:42 ***
No database operations have been performed.
*** Hourly statistics since 2019-06-20 15:39:42 ***
No database operations have been performed.
*** Latest statistics since 2019-06-20 15:39:42 ***
No database operations have been performed.
End of Statistics.
GGSCI (DSI) 7> info pump1,detail
EXTRACT PUMP1 Last Started 2019-06-20 15:37 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:03 ago)
Log Read Checkpoint File ./dirdat/r1000000
2019-06-20 15:39:35.000000 RBA 1153
Target Extract Trails:
Remote Trail Name Seqno RBA Max MB
./dirdat/t1 0 1183 5
Extract Source Begin End
./dirdat/r1000000 2019-06-20 15:27 2019-06-20 15:39
./dirdat/r1000000 * Initialized * 2019-06-20 15:27
Current directory /u01/app/ogg
Report file /u01/app/ogg/dirrpt/PUMP1.rpt
Parameter file /u01/app/ogg/dirprm/pump1.prm
Checkpoint file /u01/app/ogg/dirchk/PUMP1.cpe
Process file /u01/app/ogg/dirpcs/PUMP1.pce
Stdout file /u01/app/ogg/dirout/PUMP1.out
Error log /u01/app/ogg/ggserr.log
GGSCI (ogg) 8> info rep1,detail
REPLICAT REP1 Last Started 2019-06-20 15:38 Status RUNNING
Checkpoint Lag 00:00:00 (updated 00:00:04 ago)
Log Read Checkpoint File ./dirdat/t1000000
2019-06-20 15:39:38.217835 RBA 1183
Extract Source Begin End
./dirdat/t1000000 * Initialized * 2019-06-20 15:39
./dirdat/t1000000 * Initialized * First Record
Current directory /u01/app/ogg
Report file /u01/app/ogg/dirrpt/REP1.rpt
Parameter file /u01/app/ogg/dirprm/rep1.prm
Checkpoint file /u01/app/ogg/dirchk/REP1.cpr
Checkpoint table ogg.ggschkpt
Process file /u01/app/ogg/dirpcs/REP1.pcr
Stdout file /u01/app/ogg/dirout/REP1.out
Error log /u01/app/ogg/ggserr.log
--Hit a problem (No database operations have been performed.)
The logs show that ext1 and pump1 were capturing changes normally; the problem was in the rep1 process on the ogg side.
So I modified the parameter file. The rep1 configuration above carried many parameters and looked complicated; they were copied from someone else's setup rather than the official documentation and caused the error, so I removed them for now, to study later.
GGSCI (ogg) 32> view param rep1
REPLICAT rep1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
USERID ogg, PASSWORD ogg
HANDLECOLLISIONS
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/rep1.dsc, PURGE
MAP scott.emp_ogg, TARGET scott.emp_ogg;
MAP scott.dept_ogg, TARGET scott.dept_ogg;
GGSCI (ogg) 14> stats rep1
Sending STATS request to REPLICAT REP1 ...
No active replication maps.
GGSCI (ogg) 15> view report rep1
2019-06-20 15:49:56 INFO OGG-03035 Operating system character set identified as UTF-8. Locale: en_US, LC_ALL:.
REPLICAT rep1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
Set environment variable (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
USERID ogg,PASSWORD ***
DISCARDFILE ./dirrpt/rep1.dsc, APPEND, MEGABYTES 1000
MAP scott.emp_ogg, TARGET scott.emp_ogg;
MAP scott.dept_ogg, TARGET scott.dept_ogg;
MAP scott.dept, TARGET scott.dept;
2019-06-20 15:49:56 INFO OGG-01815 Virtual Memory Facilities for: COM
anon alloc: mmap(MAP_ANON) anon free: munmap
file alloc: mmap(MAP_SHARED) file free: munmap
target directories:
/u01/app/ogg/dirtmp.
GGSCI (ogg) 17> stop rep1
GGSCI (ogg) 18> edit param rep1
REPLICAT rep1
SETENV (NLS_LANG=AMERICAN_AMERICA.AL32UTF8)
USERID ogg, PASSWORD ogg
HANDLECOLLISIONS
ASSUMETARGETDEFS
DISCARDFILE ./dirrpt/rep1.dsc, PURGE
MAP scott.emp_ogg, TARGET scott.emp_ogg;
MAP scott.dept_ogg, TARGET scott.dept_ogg;
GGSCI (ogg) 19> delete rep1
GGSCI (ogg) 20> add REPLICAT rep1,exttrail ./dirdat/t1,checkpointtable ogg.ggschkpt
GGSCI (ogg) 21> start rep1
Checking again: the process status is normal and data is being synchronized.
GGSCI (ogg) 23> stats rep1
Sending STATS request to REPLICAT REP1 ...
Start of Statistics at 2019-06-20 15:56:01.
Replicating from SCOTT.EMP_OGG to SCOTT.EMP_OGG:
*** Total statistics since 2019-06-20 15:55:52 ***
Total inserts 0.00
Total updates 1.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00
*** Daily statistics since 2019-06-20 15:55:52 ***
Total inserts 0.00
Total updates 1.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00
*** Hourly statistics since 2019-06-20 15:55:52 ***
Total inserts 0.00
Total updates 1.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00
*** Latest statistics since 2019-06-20 15:55:52 ***
Total inserts 0.00
Total updates 1.00
Total deletes 0.00
Total discards 0.00
Total operations 1.00
End of Statistics.
Start the ext1 and pump1 processes on the ogg side, and the rep1 process on orcl:
GGSCI (DSI) 11> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING EXT1 00:00:00 00:00:06
EXTRACT RUNNING PUMP1 00:00:00 00:00:09
REPLICAT RUNNING REP1 00:00:00 00:00:04
GGSCI (ogg) 30> info all
Program Status Group Lag at Chkpt Time Since Chkpt
MANAGER RUNNING
EXTRACT RUNNING EXT1 00:00:00 00:00:09
EXTRACT RUNNING PUMP1 00:00:00 00:00:02
REPLICAT RUNNING REP1 00:00:00 00:00:05
Update test on the ogg side:
SCOTT@ ogg >update emp_ogg set ename='hq_ogg_1' where empno=7934;
1 row updated.
SCOTT@ ogg >commit;
Commit complete.
SCOTT@ ogg >select * from emp_ogg where empno=7934;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7934 hq_ogg_1 CLERK 7782 23-JAN-82 1300 10
1 row selected.
SCOTT@ orcl >select * from emp_ogg where empno=7934;
EMPNO ENAME JOB MGR HIREDATE SAL COMM DEPTNO
---------- ---------- --------- ---------- --------- ---------- ---------- ----------
7934 hq_ogg_1 CLERK 7782 23-JAN-82 1300 10
1 row selected.
The basic single-instance bidirectional replication test is complete.
[Translation] How to unit test an Angular controller
Original: http://www.bradoncode.com/blog/2015/05/17/angularjs-testing-controller/
@Bradley Braithwaite
The previous article gave a brief introduction to unit testing JavaScript with Jasmine, using a small piece of arithmetic code as the example.
Next we extend that to testing an Angular controller. Don't worry if you are not familiar with Angular; the basics are covered along the way.
Writing a simple Angular app
Before writing any tests, let's write a simple calculator app that adds two numbers.
The code is as follows:
<html>
<head>
<script type="text/javascript" src="https://code.angularjs.org/1.4.0-rc.2/angular.min.js"></script>
</head>
<body>
<!-- This div element corresponds to the CalculatorController we created via the JavaScript-->
<div ng-controller="CalculatorController">
<input ng-model="x" type="number">
<input ng-model="y" type="number">
<strong>{{z}}</strong>
<!-- the value for ngClick maps to the sum function within the controller body -->
<input type="button" ng-click="sum()" value="+">
</div>
</body>
<script type="text/javascript">
// Creates a new module called 'calculatorApp'
angular.module('calculatorApp', []);
// Registers a controller to our module 'calculatorApp'.
angular.module('calculatorApp').controller('CalculatorController', function CalculatorController($scope) {
$scope.z = 0;
$scope.sum = function() {
$scope.z = $scope.x + $scope.y;
};
});
// load the app
angular.element(document).ready(function() {
angular.bootstrap(document, ['calculatorApp']);
});
</script>
</html>
Some of the basic concepts involved:
1. Creating a module
What is angular.module? It is the place where modules are created and retrieved. We create a new module named calculatorApp and attach our components to it:
angular.module('calculatorApp', []);
What about the second argument? Passing a second argument indicates that we are creating a new module. If our application has other dependencies, we can pass them in, e.g. ['ngResource', 'ngCookies'].
The presence of the second argument means "create"; omitting it instead returns the existing module instance.
Conceptually, it is as if the API were:
* angular.module.createInstance(name, requires);
* angular.module.getInstance(name)
In practice, we write:
* angular.module('calculatorApp', []); // i.e. createInstance
* angular.module('calculatorApp'); // i.e. getInstance
More about modules: https://docs.angularjs.org/api/ng/function/angular.module
2. Adding a controller to the module
Next we add a controller to the module instance:
angular.module('calculatorApp').controller('CalculatorController', function CalculatorController($scope) {
$scope.z = 0;
$scope.sum = function() {
$scope.z = $scope.x + $scope.y;
};
});
The controller is responsible for business logic and view binding; $scope is the messenger between the controller and the view.
3. Wiring up elements in the view
In the HTML below, we want to compute from the values of the inputs, all of which are contained within the controller's div.
<div ng-controller="CalculatorController">
<input ng-model="x" type="number">
<input ng-model="y" type="number">
<strong>{{z}}</strong>
<!-- the value for ngClick maps to the sum function within the controller body -->
<input type="button" ng-click="sum()" value="+">
</div>
The ng-model on each input binds to a property defined on $scope, such as $scope.x; we also bind the $scope.sum method to the button element with ng-click.
Adding tests
Now we finally reach the main topic: adding some unit tests for the controller. We ignore the HTML and focus on the controller code:
angular.module('calculatorApp').controller('CalculatorController', function CalculatorController($scope) {
$scope.z = 0;
$scope.sum = function() {
$scope.z = $scope.x + $scope.y;
};
});
To test the controller, we need to cover these points:
How to create a controller instance
How to get/set properties on an object
How to call functions defined on $scope
describe('calculator', function () {
beforeEach(angular.mock.module('calculatorApp'));
var $controller;
beforeEach(angular.mock.inject(function(_$controller_){
$controller = _$controller_;
}));
describe('sum', function () {
it('1 + 2 should equal 3', function () {
var $scope = {};
var controller = $controller('CalculatorController', { $scope: $scope });
$scope.x = 1;
$scope.y = 2;
$scope.sum();
expect($scope.z).toBe(3);
});
});
});
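Stripped of the Angular plumbing, what the test exercises is just a function that mutates a plain scope object. A framework-free JavaScript sketch of the same idea (hypothetical; in the real test, the $controller service performs this instantiation):

```javascript
// The controller body from the app, reproduced as a plain function
function CalculatorController($scope) {
  $scope.z = 0;
  $scope.sum = function () {
    $scope.z = $scope.x + $scope.y;
  };
}

// What the Jasmine test does, without Angular or ngMock:
const $scope = {};            // a plain object stands in for the scope
CalculatorController($scope); // "instantiate" the controller
$scope.x = 1;
$scope.y = 2;
$scope.sum();                 // simulate the button click
console.log($scope.z); // 3
```

Seeing it this way makes the test's assertions less mysterious: ngMock's job is only to build the module and hand us $controller so the real controller can be instantiated this same way.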
Before starting, we need to pull in ngMock by referencing angular.mock in our test code; the ngMock module provides mechanisms for injecting and mocking services in unit tests.
How to get a controller instance
With ngMock we can register our calculator app module:
beforeEach(angular.mock.module('calculatorApp'));
Once calculatorApp is initialized, we can use the inject function to resolve a reference to the $controller service:
beforeEach(angular.mock.inject(function(_$controller_) {
$controller = _$controller_;
}));
With the app loaded and inject in place, the $controller service can create an instance of CalculatorController:
var controller = $controller('CalculatorController', { $scope: $scope });
How to get/set properties on an object
In the code above we obtained a controller instance. The second argument to $controller holds the dependencies of the controller itself; ours takes a single $scope object:
function CalculatorController($scope) { ... }
In our test, $scope can be represented by a plain JavaScript object:
var $scope = {};
var controller = $controller('CalculatorController', { $scope: $scope });
// set some properties on the scope object
$scope.x = 1;
$scope.y = 2;
We set x and y to simulate the interaction shown in the GIF earlier. We can likewise read properties off the object, as the test's assertion does:
expect($scope.z).toBe(3);
How to call functions on $scope
The last thing is simulating the user's click. As with most plain JavaScript, it is simply a function call:
$scope.sum();
The result looks like this:
Summary
This article gave a basic introduction to unit testing an Angular controller, but it still relies on repeatedly refreshing the browser. However smooth that flow gets, doing it properly with Karma is the subject of a later article, "How to test Angular with Karma" (translation in progress).
Full code: https://github.com/JackPu/angular-test-tutorial/blob/master/angular-test.html
[DBA from Beginner to Practitioner] Episode 7: How to diagnose and tune SQL on OceanBase?
Databases are the primary tool most application systems use to store data. When an application accesses a database, it uses SQL to tell the database what to do. SQL is therefore the key means by which applications "talk" to the database system; SQL performance directly affects the efficiency of that conversation, which in turn affects user response time, system throughput, and IT costs.
So what are SQL diagnosis and SQL tuning?
SQL diagnosis uses technical means to find the causes of, or potential contributors to, an inefficient "conversation", such as identifying poorly performing SQL or SQL with potential bottlenecks. SQL tuning then applies a series of techniques to improve SQL execution efficiency and remove performance bottlenecks, improving the efficiency of the application-database conversation.
Episode 7 of "DBA from Beginner to Practitioner" arrives on Wednesday, May 22, covering:
- ODP (OceanBase Database Proxy) SQL routing principles.
- How to analyze SQL monitoring views.
- How to read and manage OceanBase SQL execution plans.
- The most common SQL tuning techniques.
- Typical scenarios and troubleshooting approaches for SQL performance problems.
Scan the QR code below to register.
A sneak preview of the content:
(1) ODP routing principles
Routing is an important feature of the OceanBase distributed database and, in a distributed architecture, the key to fast data access.
The partition is the basic unit of data storage in OceanBase. When we create a table, a mapping between the table and its partitions is established. For a non-partitioned table (ignoring replicas), one table maps to one partition; for a partitioned table, one table maps to multiple partitions.
Routing lets a request reach the exact machine holding the data, based on the data distribution across OBServers; it can also, by policy, send reads with weaker consistency requirements to replica machines, making full use of machine resources. Route selection takes the user's SQL, user-configured rules, and OBServer status as input, and outputs the address of an available OBServer.
The routing logic is shown in the figure below:
(2) Analyzing SQL monitoring views
OceanBase V4.x provides a rich set of views for retrieving basic information and real-time status for all kinds of database objects in an OceanBase cluster. They fall into two categories: data dictionary views and dynamic performance views.
These views expose OceanBase's internal architecture and detailed runtime state. Through them we can conveniently inspect the system's composition and real-time status and understand how its components relate; the internal views are among the best ways to learn OceanBase. The corresponding data dictionary views are shown in the figure below.
Monitoring metrics come from OceanBase's internal dynamic performance views, and every metric can be queried via SQL. Dynamic performance views are split into GV$ views and V$ views. External monitoring systems (such as OCP) deploy an agent on each database server that periodically pulls local monitoring information (V$ views) over the SQL interface; some global information (such as Root Service data) is collected through a central node. The monitoring data is reported to the monitoring system's database and aggregated along several dimensions (cluster, tenant, node, unit) to build the overall monitoring dashboard.
(3) How to read and manage OceanBase SQL execution plans
An execution plan describes how a SQL query statement is executed inside the database. Users can view the logical execution plan the optimizer generates for a given SQL statement with the EXPLAIN command. To analyze a SQL performance problem, you usually need to look at the execution plan first and check each step for problems. Reading execution plans is therefore a prerequisite for SQL optimization, and understanding the plan operators is the key to understanding EXPLAIN output.
(4) The most common SQL tuning techniques
Once you have learned how to view the optimizer's logical plan with EXPLAIN, and how to steer the optimizer toward a specific plan with hints and outlines, you can build on that foundation and move to the most fundamental parts of OceanBase SQL performance tuning: first, an introduction to statistics and the plan cache; second, the tuning techniques every OceanBase user should know.
(5) Typical scenarios and troubleshooting approaches for SQL performance problems
With execution plans and the common tuning techniques covered, you have the background for this section. When you hit a performance problem caused by SQL, you can generally troubleshoot with the following steps:
Use end-to-end tracing to see how time is distributed across the stages and identify the slow one.
- If the slow stage is in the observer module, use oceanbase.gv$ob_sql_audit to analyze which phase inside the observer is taking long.
- If the time is spent in the execution phase, first check for the issues discussed above, such as buffer tables, heavily skewed accounts, or hard parsing.
- If none of those apply, use the plan shown by EXPLAIN EXTENDED to compare the optimizer's row estimates against actual row counts. If there is a large gap, gather statistics manually; otherwise consider creating better indexes, adjusting the plan shape with hints, or adjusting the degree of parallelism with hints.
This section first demonstrates the tools mentioned in the steps above that are commonly used for SQL performance analysis, then shows how to use them to find tuning directions, and finally summarizes the most typical SQL tuning scenarios and common problems. For more, tune in to Episode 7 of "DBA from Beginner to Practitioner" on May 22; scan the QR code on the poster below to register for the livestream.
[MySQL] How to trace a SQL statement?
MySQL 5.6.3 added the ability to trace SQL statements. The trace file shows how the optimizer chose a particular execution plan, similar to Oracle's 10053 event. To use it, first enable the settings, then execute the SQL once, and finally read the INFORMATION_SCHEMA.OPTIMIZER_TRACE table. Note that this is a temporary table that can only be queried from the current session, and each query returns the trace of the most recently executed SQL statement.
The relevant variables:
mysql> show variables like '%trace%';
+------------------------------+----------------------------------------------------------------------------+
| Variable_name | Value |
+------------------------------+----------------------------------------------------------------------------+
| optimizer_trace | enabled=off,one_line=off |
| optimizer_trace_features | greedy_search=on,range_optimizer=on,dynamic_range=on,repeated_subselect=on |
| optimizer_trace_limit | 1 |
| optimizer_trace_max_mem_size | 16384 |
| optimizer_trace_offset | -1 |
+------------------------------+----------------------------------------------------------------------------+
5 rows in set (0.02 sec)
The commands to enable tracing:
SET optimizer_trace='enabled=on'; # enable tracing
SET OPTIMIZER_TRACE_MAX_MEM_SIZE=1000000; # size the trace buffer as needed; optional
SET END_MARKERS_IN_JSON=ON; # add end-marker comments to the JSON; default OFF
SET optimizer_trace_limit = 1;
A case of MySQL choosing the wrong index, with a detailed analysis of the OPTIMIZER_TRACE format
http://blog.csdn.net/melody_mr/article/details/48950601
1. The table structure is as follows:
CREATE TABLE t_audit_operate_log (
Fid bigint(16) AUTO_INCREMENT,
Fcreate_time int(10) unsigned NOT NULL DEFAULT '0',
Fuser varchar(50) DEFAULT '',
Fip bigint(16) DEFAULT NULL,
Foperate_object_id bigint(20) DEFAULT '0',
PRIMARY KEY (Fid),
KEY indx_ctime (Fcreate_time),
KEY indx_user (Fuser),
KEY indx_objid (Foperate_object_id),
KEY indx_ip (Fip)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
Run the query:
mysql> explain select count(*) from t_audit_operate_log where Fuser='XX@XX.com' and Fcreate_time>=1407081600 and Fcreate_time<=1407427199\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: t_audit_operate_log
type: ref
possible_keys: indx_ctime,indx_user
key: indx_user
key_len: 153
ref: const
rows: 2007326
Extra: Using where
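The key_len column is worth decoding, since it tells you which key parts are actually in play. A quick sanity check of the 153 reported above — the byte accounting here is a standard MySQL illustration added for this write-up, not quoted from the original post:

```python
# Where key_len 153 comes from, assuming the table's utf8 charset:
# 3 bytes per character, a 2-byte VARCHAR length prefix, and 1 byte
# for the NULL flag (Fuser is nullable in the CREATE TABLE above).
user_key_len = 50 * 3 + 2 + 1
print(user_key_len)  # 153, matching the EXPLAIN output

# The forced-index plan below reports key_len 5 for indx_ctime; an INT
# is 4 bytes, so the extra byte suggests the real column was nullable,
# despite the NOT NULL in the quoted CREATE TABLE.
ctime_key_len = 4 + 1
print(ctime_key_len)  # 5
```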
MySQL picked an unsuitable index — not ideal — so the query was changed to name the index explicitly:
mysql> explain select count(*) from t_audit_operate_log use index(indx_ctime) where Fuser='CY6016@cyou-inc.com' and Fcreate_time>=1407081600 and Fcreate_time<=1407427199\G
*************************** 1. row ***************************
id: 1
select_type: SIMPLE
table: t_audit_operate_log
type: range
possible_keys: indx_ctime
key: indx_ctime
key_len: 5
ref: NULL
rows: 670092
Extra: Using where
Measured execution time: the second query ran close to 10 times faster than the first.
The puzzle: why did the optimizer pass over indx_ctime and pick indx_user, which clearly scans far more rows?
Compare the data volume behind the two indexes — the selectivity of the two conditions:
select count(*) from t_audit_operate_log where Fuser='XX@XX.com';
+----------+
| count(*) |
+----------+
| 1238382 |
+----------+
select count(*) from t_audit_operate_log where Fcreate_time>=1407254400 and Fcreate_time<=1407427199;
+----------+
| count(*) |
+----------+
| 198920 |
+----------+
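Putting those counts next to the table size that the trace later reports (8,150,516 rows under table_scan) makes the gap concrete. A back-of-the-envelope check, using only figures quoted in this post:

```python
# All three numbers are quoted elsewhere in this post.
total_rows = 8_150_516   # "table_scan" row estimate from the optimizer trace
fuser_rows = 1_238_382   # rows with Fuser = 'XX@XX.com'
ctime_rows = 198_920     # rows in the Fcreate_time window

fuser_sel = fuser_rows / total_rows
ctime_sel = ctime_rows / total_rows
print(f"indx_user selectivity:  {fuser_sel:.1%}")   # roughly 15.2%
print(f"indx_ctime selectivity: {ctime_sel:.1%}")   # roughly 2.4%
```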
Clearly indx_ctime beats indx_user here, yet MySQL chose indx_user. Why?
Time to dig deeper with OPTIMIZER_TRACE.
2. Walking through the OPTIMIZER_TRACE output
This case illustrates the trace step by step.
How to view OPTIMIZER_TRACE:
1. set optimizer_trace='enabled=on';           --- enable tracing
2. set optimizer_trace_max_mem_size=1000000;   --- set the trace size
3. set end_markers_in_json=on;                 --- add comments to the trace
4. select * from information_schema.optimizer_trace\G
{
  "steps": [
    {
      "join_preparation": {                    --- optimization preparation
        "select#": 1,
        "steps": [
          {
            "expanded_query": "/* select#1 */ select count(0) AS `count(*)` from `t_audit_operate_log` where ((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))"
          }
        ] /* steps */
      } /* join_preparation */
    },
    {
      "join_optimization": {                   --- main optimization work: logical optimization, then physical optimization
        "select#": 1,
        "steps": [                             --- logical optimization phase
          {
            "condition_processing": {          --- logical optimization: condition simplification
              "condition": "WHERE",
              "original_condition": "((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))",
              "steps": [
                {
                  "transformation": "equality_propagation",      --- condition simplification: equality propagation
                  "resulting_condition": "((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))"
                },
                {
                  "transformation": "constant_propagation",      --- condition simplification: constant propagation
                  "resulting_condition": "((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))"
                },
                {
                  "transformation": "trivial_condition_removal", --- condition simplification: trivial condition removal
                  "resulting_condition": "((`t_audit_operate_log`.`Fuser` = 'XX@XX.com') and (`t_audit_operate_log`.`Fcreate_time` >= 1407081600) and (`t_audit_operate_log`.`Fcreate_time` <= 1407427199))"
                }
              ] /* steps */
            } /* condition_processing */
          },                                   --- condition simplification ends
          {
            "table_dependencies": [            --- find dependencies between tables; not a directly usable optimization
              {
                "table": "`t_audit_operate_log`",
                "row_may_be_null": false,
                "map_bit": 0,
                "depends_on_map_bits": [
                ] /* depends_on_map_bits */
              }
            ] /* table_dependencies */
          },
          {
            "ref_optimizer_key_uses": [        --- collect the candidate indexes
              {
                "table": "`t_audit_operate_log`",
                "field": "Fuser",
                "equals": "'XX@XX.com'",
                "null_rejecting": false
              }
            ] /* ref_optimizer_key_uses */
          },
          {
            "rows_estimation": [               --- estimate row counts; cost the full table scan and every index scan on the table
              {
                "table": "`t_audit_operate_log`",
                "range_analysis": {
                  "table_scan": {              --- cost of a full table scan
                    "rows": 8150516,
                    "cost": 1.73e6
                  } /* table_scan */,
                  "potential_range_indices": [ --- list the candidate indexes (spelled potential_range_indexes in later versions)
                    {
                      "index": "PRIMARY",      --- the primary key index is not usable here
                      "usable": false,
                      "cause": "not_applicable"
                    },
                    {
                      "index": "indx_ctime",   --- index indx_ctime
                      "usable": true,
                      "key_parts": [
                        "Fcreate_time",
                        "Fid"
                      ] /* key_parts */
                    },
                    {
                      "index": "indx_user",    --- index indx_user
                      "usable": true,
                      "key_parts": [
                        "Fuser",
                        "Fid"
                      ] /* key_parts */
                    },
                    {
                      "index": "indx_objid",   --- index indx_objid, not applicable
                      "usable": false,
                      "cause": "not_applicable"
                    },
                    {
                      "index": "indx_ip",      --- index indx_ip, not applicable
                      "usable": false,
                      "cause": "not_applicable"
                    }
                  ] /* potential_range_indices */,
                  "setup_range_conditions": [  --- if conditions can be pushed down, consider them for the range query
                  ] /* setup_range_conditions */,
                  "group_index_range": {       --- with GROUP BY or DISTINCT, check whether an index can optimize them, including MIN/MAX
                    "chosen": false,
                    "cause": "not_group_by_or_distinct"
                  } /* group_index_range */,
                  "analyzing_range_alternatives": {      --- cost a range scan on each index (equality comparison is a special case of range)
                    "range_scan_alternatives": [
                      {
                        "index": "indx_ctime",           --- [A]
                        "ranges": [
                          "1407081600 <= Fcreate_time <= 1407427199"
                        ] /* ranges */,
                        "index_dives_for_eq_ranges": true,
                        "rowid_ordered": false,
                        "using_mrr": true,
                        "index_only": false,
                        "rows": 688362,
                        "cost": 564553,                  --- this index has the lowest cost
                        "chosen": true                   --- lowest cost, chosen (cheaper than table_scan and every other index)
                      },
                      {
                        "index": "indx_user",
                        "ranges": [
                          "XX@XX.com <= Fuser <= XX@XX.com"
                        ] /* ranges */,
                        "index_dives_for_eq_ranges": true,
                        "rowid_ordered": true,
                        "using_mrr": true,
                        "index_only": false,
                        "rows": 1945894,
                        "cost": 1.18e6,
                        "chosen": false,
                        "cause": "cost"
                      }
                    ] /* range_scan_alternatives */,
                    "analyzing_roworder_intersect": {
                      "usable": false,
                      "cause": "too_few_roworder_scans"
                    } /* analyzing_roworder_intersect */
                  } /* analyzing_range_alternatives */,  --- range-scan costing ends
                  "chosen_range_access_summary": {       --- summary of the best range access found in this phase
                    "range_access_plan": {
                      "type": "range_scan",
                      "index": "indx_ctime",
                      "rows": 688362,
                      "ranges": [
                        "1407081600 <= Fcreate_time <= 1407427199"
                      ] /* ranges */
                    } /* range_access_plan */,
                    "rows_for_plan": 688362,
                    "cost_for_plan": 564553,
                    "chosen": true             --- cost and rows are both far below indx_user; same figures as at [A], summarized here
                  } /* chosen_range_access_summary */
                } /* range_analysis */
              }
            ] /* rows_estimation */            --- row estimation ends
          },
          {
            "considered_execution_plans": [    --- physical optimization: cost the multi-table join / access plans
              {
                "plan_prefix": [
                ] /* plan_prefix */,
                "table": "`t_audit_operate_log`",
                "best_access_path": {
                  "considered_access_paths": [
                    {
                      "access_type": "ref",    --- cost of a ref lookup on indx_user
                      "index": "indx_user",
                      "rows": 1.95e6,
                      "cost": 683515,
                      "chosen": true
                    },                         --- every usable index should be compared here (several blocks of the same shape, one per index), yet only one appears; presumably a bug -- not every index was iterated
                    {
                      "access_type": "range",  --- likely corresponds to indx_ctime (inferred by comparison with 5.7 traces; no live instance to debug against)
                      "rows": 516272,
                      "cost": 702225,          --- higher than ref's 683515, so not chosen
                      "chosen": false          --- cost grew a lot versus the earlier estimate while rows barely changed; this path was not chosen
                    }
                  ] /* considered_access_paths */
                } /* best_access_path */,
                "cost_for_plan": 683515,       --- summary of the result of the best_access_path stage
                "rows_for_plan": 1.95e6,
                "chosen": true                 --- cost is suddenly much smaller here even though rows barely changed; summary of best_access_path
              }
            ] /* considered_execution_plans */
          },
          {
            "attaching_conditions_to_tables": {--- attach each condition to the table it applies to where possible
            } /* attaching_conditions_to_tables */
          },
          {
            "refine_plan": [
              {
                "table": "`t_audit_operate_log`"  --- push index conditions down ("pushed_index_condition"); attach the remaining conditions to the table as filters ("table_condition_attached")
              }
            ] /* refine_plan */
          }
        ] /* steps */
      } /* join_optimization */                --- logical and physical optimization end
    },
    {
      "join_explain": {} /* join_explain */
    }
  ] /* steps */
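The decisive numbers in the trace are the per-index figures under range_scan_alternatives. What the range optimizer does at that point can be sketched as simply keeping the lowest estimated cost (numbers copied from the trace; the helper name is invented for this sketch):

```python
# Row and cost estimates copied from range_scan_alternatives in the trace.
alternatives = {
    "indx_ctime": {"rows": 688_362,   "cost": 564_553},
    "indx_user":  {"rows": 1_945_894, "cost": 1_180_000},  # 1.18e6
}

def cheapest_range_index(alts):
    """Pick the alternative with the lowest estimated cost."""
    return min(alts, key=lambda name: alts[name]["cost"])

print(cheapest_range_index(alternatives))  # indx_ctime
```

Which is exactly the tension in this case: the range analysis prefers indx_ctime, but the later best_access_path step compares a ref lookup on indx_user (cost 683515) against a range path (cost 702225) and settles on ref.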
3. Another, similar problem
Single-table scan: a case of fetching data from an index via ref versus range
http://blog.163.com/li_hx/blog/static/183991413201461853637715/
4. How the problem is resolved
When a table carries several indexes, versions before MySQL 5.6.20 sometimes need a manually forced index to get the best result.
Note: original post at http://blog.csdn.net/xj626852095/article/details/52767963
I recently hit a production SELECT where EXPLAIN chooses the same index either way; the index covers two columns.
For example: select * from t1 where a='xxx' and b>='123123', with index a_b (a,b).
By default EXPLAIN reports access type ref; with force index a_b it becomes range, and range actually performs better in practice.
-- Paste the full execution plan:
| 1 | SIMPLE | subscribe_f8 | ref | PRIMARY,uid | uid | 8 | const | 13494670 | Using where; Using index
After force index:
| 1 | SIMPLE | subscribe_f8 | range | uid | uid | 12 | NULL | 13494674 | Using where; Using index |
-- The two plans differ little:
type changed from ref to range; key_len was 8 before the force and 12 after. 12 is in fact the reasonable value.
-- Does your version support EXPLAIN FORMAT=JSON? If so, try it; it shows more detailed cost figures.
-- What does SHOW CREATE TABLE say?
The detailed plan came back; see "Execution plan 1" below.
Execution plan 1
select uid_from,create_time from subscribe_f8 where uid=12345678 and create_time > '2013-09-08 09:54:07.0' order by create_time asc limit 5000 | { "steps": [ { "join_preparation": { "select#": 1, "steps": [ { "expanded_query": "/* select#1 */ select `subscribe_f8`.`uid_from` AS `uid_from`,`subscribe_f8`.`create_time` AS `create_time` from `subscribe_f8` where ((`subscribe_f8`.`uid` = 12345678) and (`subscribe_f8`.`create_time` > '2013-09-08 09:54:07.0')) order by `subscribe_f8`.`create_time` limit 5000" } ] } }, { ...... { "considered_execution_plans": [ { "plan_prefix": [ ], "table": "`subscribe_f8`", "best_access_path": { "considered_access_paths": [ { "access_type": "ref", "index": "PRIMARY", "rows": 1.36e7, "cost": 3.01e6, "chosen": true }, { "access_type": "ref", "index": "uid", "rows": 1.36e7, "cost": 2.77e6, "chosen": true }, { "access_type": "range", "rows": 1.02e7, "cost": 5.46e6, "chosen": false } ] }, "cost_for_plan": 2.77e6, "rows_for_plan": 1.36e7, "chosen": true } ] }, ... }
Analysis: in this case the plan says ref is better, but when actually executed, forcing the range access makes the SQL run faster.
Normally ref is more efficient than range, so MySQL prefers ref — that is a heuristic rule.
But whether ref or range is ultimately used is still decided by comparing estimated costs.
Cost estimation is an approximation: some of its inputs are themselves estimates, far from exact, which introduces computation error.
When index selectivity is low (say, under 10%), ref is very likely to beat range. Conversely, when selectivity is high, ref is not necessarily better than range — yet estimation error can still lead the planner to the wrong conclusion that it is.
Going further: when selectivity is very high (well above 10% — a rough figure, not a precise threshold), and especially when the matching rows are stored contiguously, an index scan can be worse than a full table scan even though the index exists.
One more note: although this SQL carries a LIMIT clause, it has no bearing on the ref-versus-range cost comparison.
Gathering more facts:
-- How many rows does this query return? What share of the whole table is that?
With the LIMIT removed, the rows in that time window are 88% of that uid's rows and about 40% of the whole table.
Further analysis: the detailed trace ("Execution plan 1") shows ref at cost '2.77e6' against range at '5.46e6', so the optimizer naturally concluded that ref is better.
But with selectivity this high, using the index is close to pointless (a fact the optimizer cannot see), so in practice 'force index (uid)' gives the better execution.
That is the answer to this phenomenon.
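On paper the choice recorded in "Execution plan 1" is mechanical — keep the cheapest of the three candidate paths. A sketch with the quoted numbers:

```python
# Candidate access paths and their costs, copied from the
# considered_access_paths block of "Execution plan 1" above.
paths = {
    "ref PRIMARY": 3.01e6,
    "ref uid":     2.77e6,
    "range":       5.46e6,
}

best = min(paths, key=paths.get)
print(best)  # the planner's pick, even though range ran faster in practice
```

The logic is sound; it is the cost estimates feeding it that are off, which is why FORCE INDEX was needed here.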
Digging into the code: best_access_path() is where the costs of the candidate paths are compared, so the choice among ref, range, and even a full table scan is computed in that function.
Some of the comments in that function, excerpted below, convey the intent:
/*
Don't test table scan if it can't be better.
Prefer key lookup if we would use the same key for scanning.
Don't do a table scan on InnoDB tables, if we can read the used
parts of the row from any of the used index.
This is because table scans uses index and we would not win
anything by using a table scan. The only exception is INDEX_MERGE
quick select. We can not say for sure that INDEX_MERGE quick select
is always faster than ref access. So it's necessary to check if
ref access is more expensive.
We do not consider index/table scan or range access if:
1a) The best 'ref' access produces fewer records than a table scan
(or index scan, or range acces), and
1b) The best 'ref' executed for all partial row combinations, is
cheaper than a single scan. The rationale for comparing
COST(ref_per_partial_row) * E(#partial_rows)
vs
COST(single_scan)
is that if join buffering is used for the scan, then scan will
not be performed E(#partial_rows) times, but
E(#partial_rows)/E(#partial_rows_fit_in_buffer). At this point
in best_access_path() we don't know this ratio, but it is
somewhere between 1 and E(#partial_rows). To avoid
overestimating the total cost of scanning, the heuristic used
here has to assume that the ratio is 1. A more fine-grained
cost comparison will be done later in this function.
(2) This doesn't hold: the best way to perform table scan is to to perform
'range' access using index IDX, and the best way to perform 'ref'
access is to use the same index IDX, with the same or more key parts.
(note: it is not clear how this rule is/should be extended to
index_merge quick selects)
(3) See above note about InnoDB.
(4) NOT ("FORCE INDEX(...)" is used for table and there is 'ref' access
path, but there is no quick select)
If the condition in the above brackets holds, then the only possible
"table scan" access method is ALL/index (there is no quick select).
Since we have a 'ref' access path, and FORCE INDEX instructs us to
choose it over ALL/index, there is no need to consider a full table
scan.
*/
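Conditions (1a) and (1b) in the excerpt reduce to a simple gate: scan-based paths are dropped only when the best ref both reads fewer rows and costs less than a single scan. A deliberately simplified rendering of that gate (the real best_access_path() also weighs join buffering and more):

```python
def consider_scan(ref_rows, ref_cost, scan_rows, scan_cost):
    """Return True if a table/index/range scan should still be considered.

    A toy version of conditions (1a) and (1b) quoted above: the scan is
    skipped only when the best ref reads fewer rows AND is cheaper than
    a single scan.
    """
    ref_wins = ref_rows < scan_rows and ref_cost < scan_cost
    return not ref_wins

# With the first case's estimates (ref on indx_user vs. a full table scan),
# ref wins on both counts, so the scan-based paths are pruned early.
print(consider_scan(ref_rows=1_945_894, ref_cost=683_515,
                    scan_rows=8_150_516, scan_cost=1_730_000))  # False
```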
About Me
.............................................................................................................................................
● Author: 小麦苗. Parts of this article are collected from the web; contact 小麦苗 for removal in case of infringement.
● Also published on itpub (http://blog.itpub.net/26736162/abstract/1/), cnblogs (http://www.cnblogs.com/lhrbest), and the WeChat public account xiaomaimiaolhr.
● Written between 2017-12-01 09:00 and 2017-12-31 22:00 in Shanghai.
● Source: ITPUB blog, http://blog.itpub.net/26736162/viewspace-2149385/ — please credit the source when reposting.