This post covers how to migrate tables from MS Access to Postgres with pyodbc (and importing tables into an Access database), along with several related topics: Access PostgreSQL with Python; linux – migrating from PostgreSQL 9.3 to 9.4, with PostGIS; ms-access – saving the password of an ODBC connection from MS Access 2007 to MS SQL Server; and a first look at migrating an Oracle database to PostgreSQL/EDB.
Contents:
- How to migrate tables from MS Access to Postgres using pyodbc? (importing tables into an Access database)
- Access PostgreSQL with Python
- linux – migrating from PostgreSQL 9.3 to 9.4, with PostGIS
- ms-access – saving the password of an ODBC connection from MS Access 2007 to MS SQL Server
- A first look at migrating an Oracle database to PostgreSQL/EDB
How to migrate tables from MS Access to Postgres using pyodbc? (importing tables into an Access database)
I ended up using a combination of pywin32 and pyodbc. pywin32 "is basically a very thin wrapper that allows us to interact with COM objects and automate Windows applications with Python" (quote from the second link below).
Using pywin32 I was able to interact with Access programmatically and call DoCmd.TransferDatabase:

https://docs.microsoft.com/en-us/office/vba/api/access.docmd.transferdatabase
https://pbpython.com/windows-com.html
This approach seems to work for me. I like that the table/field creation as well as the data insertion are handled automatically (which was the original problem I ran into when using pyodbc alone).

import logging
from pathlib import Path

import pyodbc
import win32com.client

# access_database_location, pg_user, pg_pwd and pg_port are assumed to be
# defined elsewhere in the script.
conn_str = (
    r'DRIVER={Microsoft Access Driver (*.mdb, *.accdb)};'
    rf'DBQ={access_database_location};'
)
conn = pyodbc.connect(conn_str)
cursor = conn.cursor()

a = win32com.client.Dispatch("Access.Application")
a.OpenCurrentDatabase(access_database_location)

table_list = [table_info.table_name
              for table_info in cursor.tables(tableType='TABLE')]

acExport = 1  # constants from the Access object model
acTable = 0
db_name = Path(access_database_location).stem.lower()

for table in table_list:
    logging.info(f"Exporting: {table}")
    a.DoCmd.TransferDatabase(
        acExport,
        "ODBC Database",
        "ODBC;DRIVER={PostgreSQL Unicode};"
        f"DATABASE={db_name};"
        f"UID={pg_user};"
        f"PWD={pg_pwd};"
        "SERVER=localhost;"
        f"PORT={pg_port};",
        acTable,
        table,
        f"{table.lower()}_export_from_access",
    )
    logging.info(f"Finished export of table: {table}")
    logging.info("Creating empty table in EGDB based off of this")
If anyone has a better approach, I'm open to suggestions.
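One alternative (an untested sketch, not part of the original answer) is to skip COM entirely: read each Access table with pyodbc and bulk-insert into Postgres with psycopg2. The helper below only builds the parameterized INSERT statement; `access_cursor`, `pg_cursor`, and the table names in the commented usage are assumptions.

```python
def build_insert_sql(table, columns):
    """Build a parameterized INSERT statement for psycopg2 from a column list."""
    col_list = ", ".join(columns)
    placeholders = ", ".join(["%s"] * len(columns))
    return "INSERT INTO {} ({}) VALUES ({})".format(table, col_list, placeholders)

# Hypothetical usage against live connections (not runnable as-is):
#   rows = access_cursor.execute("SELECT * FROM SomeTable").fetchall()
#   columns = [d[0] for d in access_cursor.description]
#   sql = build_insert_sql("sometable", [c.lower() for c in columns])
#   pg_cursor.executemany(sql, [tuple(r) for r in rows])

build_insert_sql("bar", ["a", "b"])  # → "INSERT INTO bar (a, b) VALUES (%s, %s)"
```

The downside of this route, as the answer notes, is that you must create the target tables yourself first; DoCmd.TransferDatabase handles that for you.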
Access PostgreSQL with Python
http://wiki.postgresql.org/wiki/Psycopg2_Tutorial
There are any number of programming languages available for you to use with PostgreSQL. One could argue that PostgreSQL as an Open Source database has one of the largest libraries of Application Programmable Interfaces (API) available for various languages.
One such language is Python and it happens to be one of my favored languages. I use it for almost all hacking that I do. Why? Well to be honest it is because I am not that great of a programmer. I am a database administrator and operating system consultant by trade. Python ensures that the code that I write is readable by other more talented programmers 6 months from when I stopped working on it.
Nine times out of ten, when I am using Python, I am using the language to communicate with a PostgreSQL database. My driver of choice when doing so is called Psycopg. Recently Psycopg2 has been under heavy development and is currently in Beta 4. It is said that this will be the last Beta. Like the first release of Psycopg, the driver is designed to be lightweight and fast.
The following article discusses how to connect to PostgreSQL with Psycopg2 and also illustrates some of the nice features that come with the driver. The test platform for this article is Psycopg2, Python 2.4, and PostgreSQL 8.1dev.
Psycopg2 is a DB API 2.0 compliant PostgreSQL driver that is actively developed. It is designed for multi-threaded applications and manages its own connection pool. Other interesting features of the adapter are that if you are using the PostgreSQL array data type, Psycopg will automatically convert a result using that data type to a Python list.
The following discusses specific use of Psycopg. It does not try to implement a lot of Object Oriented goodness, but to provide clear and concise syntactical examples of using the driver with PostgreSQL. Making the initial connection:
#!/usr/bin/python2.4
#
# Small script to show PostgreSQL and Psycopg together
#
import psycopg2

try:
    conn = psycopg2.connect("dbname='template1' user='dbuser' host='localhost' password='dbpass'")
except:
    print "I am unable to connect to the database"
The above will import the adapter and try to connect to the database. If the connection fails a print statement will occur to STDOUT. You could also use the exception to try the connection again with different parameters if you like.
The next step is to define a cursor to work with. It is important to note that Python/Psycopg cursors are not cursors as defined by PostgreSQL. They are completely different beasts.
cur = conn.cursor()
Now that we have the cursor defined we can execute a query.
cur.execute("""SELECT datname from pg_database""")
When you have executed your query you need a variable to put your results in.
rows = cur.fetchall()
Now all the results from our query are within the variable named rows. Using this variable you can start processing the results. To print the screen you could do the following.
print "\nShow me the databases:\n"
for row in rows:
    print "  ", row[0]
Everything we just covered should work with any database that Python can access.
Execution within a transaction is a very good thing: it ensures data integrity and allows for appropriate error handling. However, there are queries that cannot be run from within a transaction. Take the following example.
#!/usr/bin/python2.4
#
import psycopg2

# Try to connect
try:
    conn = psycopg2.connect("dbname='template1' user='dbuser' password='mypass'")
except:
    print "I am unable to connect to the database."

cur = conn.cursor()

try:
    cur.execute("""DROP DATABASE foo_test""")
except:
    print "I can't drop our test database!"
This code would actually fail with the printed message "I can't drop our test database!" PostgreSQL cannot drop databases within a transaction; it is an all or nothing command. If you want to drop the database you need to change the isolation level of the connection, which is done using the following.
conn.set_isolation_level(0)
You would place the above immediately preceding the DROP DATABASE cursor execution.
The psycopg2 adapter also has the ability to deal with some of the special data types that PostgreSQL has available. One such example is arrays. Let's review the table below:
         Table "public.bar"
 Column |  Type  |                      Modifiers
--------+--------+-----------------------------------------------------
 id     | bigint | not null default nextval('public.bar_id_seq'::text)
 notes  | text[] |
Indexes:
    "bar_pkey" PRIMARY KEY, btree (id)
The notes column in the bar table is of type text[]. The [] has special meaning in PostgreSQL. The [] represents that the type is not just text but an array of text. To insert values into this table you would use a statement like the following.
foo=# insert into bar(notes) values ('{An array of text, Another array of text}');
Which when selected from the table would have the following representation.
foo=# select * from bar;
 id |                    notes
----+----------------------------------------------
  2 | {"An array of text","Another array of text"}
(1 row)
Some languages and database drivers would insist that you manually create a routine to parse the above array output. Psycopg2 does not force you to do that. Instead it converts the array into a Python list.
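To appreciate what the driver spares you, here is a rough idea of that parsing (a deliberately simplified sketch in modern Python 3, not psycopg2's actual code; the real parser also handles escapes, NULLs, and multi-dimensional arrays):

```python
def parse_pg_text_array(literal):
    """Very simplified parser for a one-dimensional Postgres text[] literal.
    Handles double-quoted and unquoted elements only."""
    inner = literal.strip()[1:-1]          # drop the surrounding { }
    items, buf, in_quotes = [], [], False
    for ch in inner:
        if ch == '"':
            in_quotes = not in_quotes      # toggle quoted state
        elif ch == ',' and not in_quotes:
            items.append(''.join(buf))     # element boundary
            buf = []
        else:
            buf.append(ch)
    if buf or items:
        items.append(''.join(buf))
    return items

parse_pg_text_array('{"An array of text","Another array of text"}')
# → ['An array of text', 'Another array of text']
```

With psycopg2 none of this is needed: a selected text[] column simply arrives as a Python list.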
#!/usr/bin/python2.4
#
import psycopg2

# Try to connect
try:
    conn = psycopg2.connect("dbname='foo' user='dbuser' password='mypass'")
except:
    print "I am unable to connect to the database."

cur = conn.cursor()

try:
    cur.execute("""SELECT * from bar""")
except:
    print "I can't SELECT from bar"

rows = cur.fetchall()
print "\nRows: \n"
for row in rows:
    print "  ", row[1]
When the script was executed the following output would be presented.
[jd@jd ~]$ python test.py

Rows:

  ['An array of text', 'Another array of text']
You could then access the list in Python with something similar to the following.
#!/usr/bin/python2.4
#
import psycopg2

# Try to connect
try:
    conn = psycopg2.connect("dbname='foo' user='dbuser' password='mypass'")
except:
    print "I am unable to connect to the database."

cur = conn.cursor()

try:
    cur.execute("""SELECT * from bar""")
except:
    print "I can't SELECT from bar"

rows = cur.fetchall()
for row in rows:
    print "  ", row[1][1]
The above would output the following.
Rows:
  Another array of text
Some programmers would prefer not to use the numeric representation of the column, for example row[1][1]; instead it can be easier to use a dictionary. Using the example with a slight modification:
#!/usr/bin/python2.4
#
# load the adapter
import psycopg2
# load the psycopg extras module
import psycopg2.extras

# Try to connect
try:
    conn = psycopg2.connect("dbname='foo' user='dbuser' password='mypass'")
except:
    print "I am unable to connect to the database."

# If we are accessing the rows via column name instead of position we
# need to add the arguments to conn.cursor.
cur = conn.cursor(cursor_factory=psycopg2.extras.DictCursor)

try:
    cur.execute("""SELECT * from bar""")
except:
    print "I can't SELECT from bar"

# Note that below we are accessing the row via the column name.
rows = cur.fetchall()
for row in rows:
    print "  ", row['notes'][1]
The above would output the following.
Rows:
  Another array of text
Notice that we did not use row[1] but instead used row['notes'], which signifies the notes column within the bar table.
A last item I would like to show you is how to insert multiple rows using a dictionary. If you had the following:
namedict = ({"first_name": "Joshua", "last_name": "Drake"},
            {"first_name": "Steven", "last_name": "Foo"},
            {"first_name": "David", "last_name": "Bar"})
You could easily insert all three rows within the dictionary by using:
cur = conn.cursor()
cur.executemany("""INSERT INTO bar(first_name, last_name) VALUES (%(first_name)s, %(last_name)s)""", namedict)
The cur.executemany statement will automatically iterate through the sequence of dictionaries and execute the INSERT query for each one.
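The %(name)s placeholders follow the shape of Python's own dict-based string formatting, which is why the dictionary keys must match the placeholder names. The sketch below only illustrates that mapping; unlike psycopg2 it performs no quoting or escaping, so never build real SQL this way:

```python
# Illustration only: how %(name)s placeholders map onto dict keys.
# psycopg2 does proper value adaptation and escaping internally.
template = "INSERT INTO bar(first_name, last_name) VALUES ('%(first_name)s', '%(last_name)s')"
namedict = ({"first_name": "Joshua", "last_name": "Drake"},
            {"first_name": "Steven", "last_name": "Foo"})

statements = [template % row for row in namedict]
# statements[0] → INSERT INTO bar(first_name, last_name) VALUES ('Joshua', 'Drake')
```

A key whose name does not appear among the placeholders is simply ignored, but a placeholder without a matching key raises KeyError, mirroring the error psycopg2 reports for a missing parameter.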
The only downside I have run into with Psycopg2 and PostgreSQL is that it is a little behind in terms of server-side support, such as server-side prepared queries, but the author is said to be planning these features for the near future.
linux – migrating from PostgreSQL 9.3 to 9.4, with PostGIS
I upgraded from PostgreSQL 9.3 to 9.4 and now I want to migrate my data.

So here is what I tried. First I started the old PostgreSQL like this:

/opt/pgsql-9.3/bin/pg_ctl -D /var/lib/postgres/data/ start

Then I tried to dump the old database to a file:

/opt/pgsql-9.3/bin/pg_dumpall >> old_backup.sql

It told me:
pg_dump: [archiver (db)] query Failed: ERROR: Could not access file "$libdir/postgis-2.1": No such file or directory
OK, so I tried to find the postgis-2.1 files and copy them into libdir:
find / -name "*postgis-2.1*"
/usr/lib/postgresql/rtpostgis-2.1.so
/usr/lib/postgresql/postgis-2.1.so
/usr/lib/postgresql/postgis-2.1 <-----
OK, so what is libdir?
/opt/pgsql-9.3/bin/pg_config --pkglibdir
/opt/pgsql-9.3/lib
So in /opt/pgsql-9.3/lib I created a symlink pointing to /usr/lib/postgresql/postgis-2.1:
pwd
/opt/pgsql-9.3/lib
ls -l postgis-2.1
postgis-2.1 -> /usr/share/postgresql/contrib/postgis-2.1
But I still get the error: query failed: ERROR: Could not access file "$libdir/postgis-2.1": No such file or directory

I'm out of ideas. Maybe someone can help me?

I am using Arch Linux.
P.S.

PostGIS is installed:
pacman -S postgis
warning: postgis-2.1.5-1 is up to date -- reinstalling
Here are the binaries:
find / -name "*postgis-2.1*"
/usr/lib/postgresql/rtpostgis-2.1.so <---- binary
/usr/lib/postgresql/postgis-2.1.so <----- binary
/opt/pgsql-9.3/lib/postgis-2.1 <----- that's the symlink from earlier
/usr/share/postgresql/contrib/postgis-2.1
$ rm /opt/pgsql-9.3/lib/postgis-2.1   # it is a wrong link, so undo it
$ ln -s /usr/lib/postgresql/postgis-2.1.so /opt/pgsql-9.3/lib/
Now the PostGIS .so file will be in "$libdir" and you will be able to run pg_dumpall.

Of course keeping things this way would not be sane long-term, but since you are upgrading I assume this is only an intermediate state and you will remove PostgreSQL 9.3 completely. You must also verify that PostGIS is linked against the 9.3 libraries, otherwise you may run into problems.
ms-access – saving the password of an ODBC connection from MS Access 2007 to MS SQL Server
So the data has now been moved, a SQL Server user created (with access only to that particular database), and the tables linked into the Access database through an ODBC connection. One annoyance remains, though: Access periodically asks the user for the password when the Access database is opened.

Users on both the server PC and the client PCs log in to their local machines, i.e. they are not authenticated against a standalone domain server.

I can see several ways to solve this:

> 1) Configure the integrated security model so that users are authorized automatically by their Windows login (i.e. use a "trusted connection"). Given that the server PC cannot identify users coming from the client PCs, I am not sure how to do this. If I try it now, I get an error that the user is connecting from an untrusted domain.
> 2) Store the SQL Server user's password on the client side. I am not sure this is possible, though. I know that keeping the password in some configuration file, or storing it obfuscated in the application configuration, should be considered reduced security, but that is acceptable for this setup.
> 3) Or perhaps there is some other way to link SQL Server tables into Access?
Solution
If that is not suitable, here is a possible alternative trick that exploits the fact that Access remembers all opened connections until the program is closed:

> Copy the connection string from one of the tables
> Create a pass-through query "ptqConnect" and enter any quick SQL statement in it, such as SELECT 1
> Paste the connection string into the ptqConnect properties, and make sure you add PWD=something; to it.
> During your application's startup, make sure you invoke that PTQ. Something like DCount("*", "ptqConnect") will do.

That's it. Since Access keeps the opened connection around until you close it, your other tables can now be opened without any fuss, even though no password is stored in the linked tables' connection strings.

If you do not want to expose the connection string containing the PWD, you can also initiate the connection from VBA and hide the code by shipping an MDE, or by protecting the code with a password.

You can find an explanation of this behavior here.
A first look at migrating an Oracle database to PostgreSQL/EDB
For some non-technical reasons we have been looking at introducing open-source databases to replace commercial ones, and we set our sights on PostgreSQL.

This article does not cover PostgreSQL's technology itself. Although PostgreSQL is not as popular in China as MySQL, a quick search still turns up a fair amount of Chinese-language material on it, and the official PG documentation is the most thorough and most helpful resource for learning PG:

Online docs for the latest 9.4 release: http://www.postgresql.org/docs/9.4/static/index.html

Chinese translation provided by Highgo (Shandong): http://www.highgo.com.cn/docs/docs90cn/

Migrating Oracle to PostgreSQL:

The popular migration tool ora2pg: http://ora2pg.darold.net/config.html#testing

How it works, roughly:

The tool is written in Perl. The main migration behavior is driven by the ora2pg.conf parameter file, which defines the DSNs of the source Oracle database and the target database. ora2pg connects to both through the DBD::Oracle and DBD::Pg Perl modules and, according to the parameters you specify, exports the relevant object structures from the source Oracle database (optionally together with the data) and loads them into the target PG. The load can happen online (the export is piped straight into PG without touching disk), or the export scripts can first be written locally, manually reviewed and edited, and then loaded into PG with psql.

The ora2pg command-line options are also fairly self-explanatory and support all kinds of export/import combinations: you can export only certain schemas or objects, exclude certain schemas or objects from the export, or do nothing at all except analyze the source database for a PG migration and produce a report, and so on.

For the specifics, it is best to try the tool out yourself.
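ora2pg is normally driven from the shell or cron; if you script migrations in Python, a hypothetical wrapper might assemble the command like this (a sketch only: the conf path, schema name, and output file are made up, while the flags match the --help output in the appendix):

```python
def ora2pg_command(conf_file, export_type, schema, out_file):
    """Assemble an ora2pg invocation; flags per `ora2pg --help`."""
    return ["ora2pg",
            "-c", conf_file,      # alternate configuration file
            "-t", export_type,    # export type, overrides TYPE in the conf
            "-n", schema,         # Oracle schema to extract from
            "-o", out_file]       # output SQL file

cmd = ora2pg_command("/etc/ora2pg/ora2pg.conf", "TABLE", "HR", "tables.sql")
# e.g. subprocess.run(cmd, check=True)  # requires ora2pg to be installed
```

Building the argument list separately from running it makes it easy to log or dry-run the exact command before a long extraction.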
When migrating Oracle to PG with ora2pg, the number of problems you run into is roughly proportional to how much of the source Oracle database is incompatible with PG. Here is a quick summary of the issues we hit:

Issues found during manual review of the application's sqlmap.xml files:
Oracle | PostgreSQL |
dual table | no dual table; you can directly select 1, select user, select xxx |
time functions (sysdate) | current_date / current_time / current_timestamp |
trunc | trunc / date_trunc (timestamp truncation function) |
sys_guid() | a similar function, uuid_generate_v4, exists but must be installed: create extension "uuid-ossp" |
nvl | PG can use COALESCE |
where rownum < ... | select row_number() over (), * from XXXX, or limit |
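During the manual review of sqlmap.xml we were essentially grepping for the constructs in the table above; a small, hypothetical scanner for the most common Oracle-isms could look like this:

```python
import re

# Oracle constructs from the table above that need rewriting for PostgreSQL.
ORACLE_ISMS = {
    "dual":     r"\bfrom\s+dual\b",
    "sysdate":  r"\bsysdate\b",
    "nvl":      r"\bnvl\s*\(",
    "sys_guid": r"\bsys_guid\s*\(",
    "rownum":   r"\brownum\b",
}

def find_oracle_isms(sql):
    """Return the names of Oracle-only constructs found in a SQL string."""
    lowered = sql.lower()
    return sorted(name for name, pat in ORACLE_ISMS.items()
                  if re.search(pat, lowered))

find_oracle_isms("SELECT NVL(name, 'n/a'), SYSDATE FROM dual WHERE rownum < 2")
# → ['dual', 'nvl', 'rownum', 'sysdate']
```

Running something like this over all SQL map files before the migration gives a rough worklist; it obviously cannot replace a real parser (ora2pg's --estimate_cost report is far more thorough).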
Issues found when loading the migration scripts generated by ora2pg into PG:
Object | Problem | Fix |
tables | DEFAULT sys_guid(); columns named session#, serial# | DEFAULT uuid_generate_v4(); drop the # |
partitions | The parent table was already created in the tables step; creating the child tables fails to find the parent because of case differences. | Table names are case-sensitive; when inheriting from the parent table, use the same case as the parent. Adding or removing partitions requires changing the trigger function. |
synonyms | The converted statements look like: CREATE VIEW public.tablename AS SELECT * FROM owner.tablename SECURITY DEFINER; | PG has no synonyms, so they are automatically turned into views. The converted view name can clash with an existing table name and must be renamed. SECURITY DEFINER cannot be appended to the CREATE statement; control access through grants instead. |
packages | 1. PG has no v$session (most of the original Oracle packages use it): select sid, serial# into v_sid, v_serial# from v$session. 2. Some converted statements end up in the wrong order and need restructuring. 3. Packages are automatically converted to functions, and the functions are created in a schema named after the original package. | ora2pg converts the Oracle package header into a schema of the same name. |
procedures | 1. dbms_output.put_line('...') becomes RAISE NOTICE '%', '...'. 2. dbms_output.put_line(sqlerrm) becomes RAISE NOTICE '%', sqlerrm. 3. A subquery in FROM must be wrapped in parentheses and given an alias. 4. START WITH ... CONNECT BY recursion becomes WITH RECURSIVE in PG. | |
views | 1. Character type problems. 2. Recursive queries are not converted successfully. 3. The "+" outer-join notation is not converted. 4. The DECODE function needs rewriting. 5. COALESCE return-type mismatches. | 1. Character types need adjusting. 2. START WITH ... CONNECT BY recursion becomes WITH RECURSIVE in PG. 3. The (+) outer-join notation must be rewritten as standard SQL: table1 [LEFT|RIGHT|FULL] OUTER JOIN table2 ON (...). 4. DECODE must be rewritten as (case when some_column = 'some_value' then 'some_other_value' when ... then ... else 'some_default_value' end) as some_column. 5. COALESCE return types do not match and need explicit casts. |
And so on.

As you can see, migrating directly from Oracle to PostgreSQL still involves a lot of rewriting, both in the application layer and in the database layer.
Because of all the problems above, we took a look at EnterpriseDB (the commercial edition of PostgreSQL), which bills itself as born to replace Oracle:
http://www.enterprisedb.com/products-services-training/products/documentation/enterpriseedition
EnterpriseDB installation guide:
http://www.enterprisedb.com/docs/en/9.4/instguide/toc.html
EnterpriseDB official documentation:
http://www.enterprisedb.com/docs/en/9.4/eeguide/toc.html
Usage guide for Migration Toolkit, EnterpriseDB's Oracle-to-PG migration tool:
http://www.enterprisedb.com/docs/en/9.4/migrate/toc.html
The tool is simple to use: you configure a few parameters and connection settings. Far fewer problems showed up during migration than with ora2pg, but there were still some:

For example, EDB has no sys_connect_by_path function, and does not support the referencing new as new old as old clause of Oracle trigger syntax, among others.
Conclusion:

Although EDB's Migration Toolkit does not offer as many flexible migration options as ora2pg, its functionality basically covers our migration needs. More importantly, EDB wraps a layer around PostgreSQL that implements many Oracle concepts, such as the dual table, synonyms, packages, and built-in functions mentioned above; even Oracle's partitioned-table syntax runs directly on EDB. For the same source Oracle database, we found that roughly 90% of it ran directly on the target EnterpriseDB, with only a small portion needing changes.
For the full list of EnterpriseDB's Oracle-compatibility features, see
http://www.enterprisedb.com/products-services-training/products/postgres-plus-advanced-server?quicktabs_advanceservertab=3#quicktabs-advanceservertab
So, balancing cost savings against operational stability, adopting PostgreSQL directly for new systems, and using EnterpriseDB as a transitional choice when migrating existing systems to PostgreSQL, is a reasonable approach.
Appendix:
ora2pg --help
Usage: ora2pg [-dhpqv --estimate_cost --dump_as_html] [--option value]
-a | --allow str : coma separated list of objects to allow from export.
Can be used with SHOW_COLUMN too.
-b | --basedir dir: Used to set the default output directory, where files
resulting from exports will be stored.
-c | --conf file : Used to set an alternate configuration file than the
default /etc/ora2pg/ora2pg.conf.
-d | --debug : Enable verbose output.
-e | --exclude str: coma separated list of objects to exclude from export.
Can be used with SHOW_COLUMN too.
-h | --help : Print this short help.
-i | --input file : File containing Oracle PL/SQL code to convert with
no Oracle database connection initiated.
-j | --jobs num : number of parallel process to send data to PostgreSQL.
-J | --copies num : number of parallel connection to extract data from Oracle.
-l | --log file : Used to set a log file. Default is stdout.
-L | --limit num : number of tuples extracted from Oracle and stored in
memory before writing, default: 10000.
-n | --namespace schema : Used to set the Oracle schema to extract from.
-o | --out file : Used to set the path to the output file where SQL will
be written. Default: output.sql in running directory.
-p | --plsql : Enable PLSQL to PLPSQL code conversion.
-P | --parallel num: Number of parallel tables to extract at the same time.
-q | --quiet : disable progress bar.
-s | --source DSN : Allow to set the Oracle DBI datasource.
-t | --type export: Used to set the export type. It will override the one
given in the configuration file (TYPE).
-u | --user name : Used to set the Oracle database connection user.
-v | --version : Show Ora2Pg Version and exit.
-w | --password pwd : Used to set the password of the Oracle database user.
--forceowner: if set to 1 force ora2pg to set tables and sequences owner
like in Oracle database. If the value is set to a username this
one will be used as the objects owner. By default it's the user
used to connect to the Pg database that will be the owner.
--nls_lang code: use this to set the Oracle NLS_LANG client encoding.
--client_encoding code: Use this to set the PostgreSQL client encoding.
--view_as_table str: coma separated list of view to export as table.
--estimate_cost : activate the migration cost evalution with SHOW_REPORT
--cost_unit_value minutes: number of minutes for a cost evalution unit.
default: 5 minutes, correspond to a migration conducted by a
PostgreSQL expert. Set it to 10 if this is your first migration.
--dump_as_html : force ora2pg to dump report in HTML, used only with
SHOW_REPORT. Default is to dump report as simple text.
--init_project NAME: initialise a typical ora2pg project tree. Top directory
will be created under project base dir.
--project_base DIR : define the base dir for ora2pg project trees. Default
is current directory.
See full documentation at http://ora2pg.darold.net/ for more help or see manpage with 'man ora2pg'.
EDB 的 Migration Toolkit:
runMTK.sh -help
Running EnterpriseDB Migration Toolkit (Build 48.0.1) ...
EnterpriseDB Migration Toolkit (Build 48.0.1)
Usage: runMTK [-options] SCHEMA
If no option is specified, the complete schema will be imported.
where options include:
-help Display the application command-line usage.
-version Display the application version information.
-verbose [on|off] Display application log messages on standard output (default: on).
-schemaOnly Import the schema object definitions only.
-dataOnly Import the table data only. When -tables is in place, it imports data only for the selected tables. Note: If there are any FK constraints defined on target tables, use -truncLoad option along with this option.
-sourcedbtype db_type The -sourcedbtype option specifies the source database type. db_type may be one of the following values: mysql, oracle, sqlserver, sybase, postgresql, enterprisedb. db_type is case-insensitive. By default, db_type is oracle.
-targetdbtype db_type The -targetdbtype option specifies the target database type. db_type may be one of the following values: oracle, sqlserver, postgresql, enterprisedb. db_type is case-insensitive. By default, db_type is enterprisedb.
-allTables Import all tables.
-tables LIST Import comma-separated list of tables.
-constraints Import the table constraints.
-indexes Import the table indexes.
-triggers Import the table triggers.
-allViews Import all Views.
-views LIST Import comma-separated list of Views.
-allProcs Import all stored procedures.
-procs LIST Import comma-separated list of stored procedures.
-allFuncs Import all functions.
-funcs LIST Import comma-separated list of functions.
-allPackages Import all packages.
-packages LIST Import comma-separated list of packages.
-allSequences Import all sequences.
-sequences LIST Import comma-separated list of sequences.
-targetSchema NAME Name of the target schema (default: target schema is named after source schema).
-allDBLinks Import all Database Links.
-allSynonyms It enables the migration of all public and private synonyms from an Oracle database to an Advanced Server database. If a synonym with the same name already exists in the target database, the existing synonym will be replaced with the migrated version.
-allPublicSynonyms It enables the migration of all public synonyms from an Oracle database to an Advanced Server database. If a synonym with the same name already exists in the target database, the existing synonym will be replaced with the migrated version.
-allPrivateSynonyms It enables the migration of all private synonyms from an Oracle database to an Advanced Server database. If a synonym with the same name already exists in the target database, the existing synonym will be replaced with the migrated version.
-dropSchema [true|false] Drop the schema if it already exists in the target database (default: false).
-truncLoad It disables any constraints on target table and truncates the data from the table before importing new data. This option can only be used with -dataOnly.
-safeModeTransfer data in safe mode using plain SQL statements.
-copyDelimiter Specify a single character to be used as delimiter in copy command when loading table data. Default is \t
-batchSize Specify the Batch Size to be used by the bulk inserts. Valid values are 1-1000, default batch size is 1000, reduce if you run into Out of Memory exception
-cpBatchSize Specify the Batch Size in MB, to be used in the Copy Command. Valid value is > 0, default batch size is 8 MB
-fetchSize Specify fetch size in terms of number of rows should be fetched in result set at a time. This option can be used when tables contain millions of rows and you want to avoid out of memory errors.
-filterProp The properties file that contains table where clause.
-skipFKConst Skip migration of FK constraints.
-skipCKConst Skip migration of Check constraints.
-ignoreCheckConstFilter By default MTK does not migrate Check constraints and Default clauses from Sybase, use this option to turn off this filter.
-fastCopy Bypass WAL logging to perform the COPY operation in an optimized way, default disabled.
-customColTypeMapping LIST Use custom type mapping represented by a semi-colon separated list, where each entry is specified using COL_NAME_REG_EXPR=TYPE pair. e.g. .*ID=INTEGER
-customColTypeMappingFile PROP_FILE The custom type mapping represented by a properties file, where each entry is specified using COL_NAME_REG_EXPR=TYPE pair. e.g. .*ID=INTEGER
-offlineMigration [PATH] This performs offline migration and saves the DDL/DML scripts in files for a later execution. By default the script files will be saved under user home folder, if required follow -offlineMigration option with a custom path.
-logDir LOG_PATH Specify a custom path to save the log file. By default, on Linux the logs will be saved under folder $HOME/.enterprisedb/migration-toolkit/logs. In case of Windows logs will be saved under folder %HOMEDRIVE%%HOMEPATH%\.enterprisedb\migration-toolkit\logs.
-copyViaDBLinkOra This option can be used to copy data using dblink_ora COPY command. This option can only be used in Oracle to EnterpriseDB migration mode.
-singleDataFile Use single SQL file for offline data storage for all tables. This option cannot be used in COPY format.
-allUsers Import all users and roles from the source database.
-users LIST Import the selected users/roles from the source database. LIST is a comma-separated list of user/role names e.g. -users MTK,SAMPLE
-allRules Import all rules from the source database.
-rules LIST Import the selected rules from the source database. LIST is a comma-separated list of rule names e.g. -rules high_sal_emp,low_sal_emp
-allGroups Import all groups from the source database.
-groups LIST Import the selected groups from the source database. LIST is a comma-separated list of group names e.g. -groups acct_emp,mkt_emp
-allDomains Import all domain, enumeration and composite types from the source database.
-domains LIST Import the selected domain, enumeration and composite types from the source database. LIST is a comma-separated list of domain names e.g. -domains d_email,d_dob, mood
-objecttypes Import the user-defined object types.
-replaceNullChar <CHAR> If null character is part of a column value, the data migration fails over JDBC protocol. This option can be used to replace null character with a user-specified character.
-importPartitionAsTable [LIST] Use this option to import Oracle Partitioned table as a normal table in EnterpriseDB. To apply the rule on a selected set of tables, follow the option by a comma-separated list of table names.
-enableConstBeforeDataLoad Use this option to re-enable constraints (and triggers) before data load. This is useful in the scenario when the migrated table is mapped to a partition table in EnterpriseDB.
-checkFunctionBodies [true|false] When set to false, it disables validation of the function body during function creation, this is to avoid errors if function contains forward references. Applicable when target database is Postgres/EnterpriseDB, default is true.
-retryCount VALUE Specify the number of re-attempts performed by MTK to migrate objects that failed due to cross-schema dependencies. The VALUE parameter should be greater than 0, default is 2.
-analyze It invokes ANALYZE operation against a target Postgres or Postgres Plus Advanced Server database. The ANALYZE collects statistics for the migrated tables that are utilized for efficient query plans.
-vacuumAnalyze It invokes VACUUM and ANALYZE operations against a target Postgres or Postgres Plus Advanced Server database. The VACUUM reclaims dead tuple storage whereas ANALYZE collects statistics for the migrated tables that are utilized for efficient query plans.
-loaderCount VALUE Specify the number of jobs (threads) to perform data load in parallel. The VALUE parameter should be greater than 0, default is 1.
-logFileSize VALUE It represents the maximum file size limit (in MB) before rotating to a new log file, defaults to 50MB.
-logFileCount VALUE It represents the number of files to maintain in log file rotation history, defaults to 20. Specify a value of zero to disable log file rotation.
-useOraCase It preserves the identifier case while migrating from Oracle, except for functions, procedures and packages unless identifier names are given in quotes.
-logBadSQL It saves the DDL scripts for the objects that fail to migrate, in a .sql file in log folder.
-targetDBVersion It represents the major.minor version of the target database. This option is applicable for offline migration mode and is used to validate certain migration options as per target db version [default is 9.4 for EnterpriseDB database].
Database Connection Information:
The application will read the connectivity information for the source and target database servers from toolkit.properties file.
Refer to MTK readme document for more information.