xiaobaoqiu Blog

Think More, Code Less

MySQL Configuration File

An annotated explanation of the MySQL configuration file my.cnf:
http://www.blogjava.net/hunter129/archive/2009/01/18/251744.html

Numerical Algorithms

Number-theoretic algorithms.

1. x*y

/**
 * Multiply x by y.
 * Constraint: y >= 0
 *
 * Principle:         | (x*(y>>1))<<1        y even
 *          x * y  =  | x + ((x*(y>>1))<<1)  y odd
 *                    | 0                    y == 0
 * @param x
 * @param y
 * @return x * y
 */
private static int doMultiply(int x, int y) {
    if (y == 0) return 0;

    int z = doMultiply(x, y >> 1);
    return NumberUtils.isEven(y) ? z << 1 : x + (z << 1);
}
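
The snippets in this post rely on a NumberUtils helper that is not shown; a minimal sketch of what it is assumed to provide (the class itself is hypothetical, only isEven/isOdd are needed):

/** Minimal parity helpers assumed by the snippets in this post (hypothetical sketch). */
public final class NumberUtils {
    private NumberUtils() {}

    /** True if n's lowest bit is 0, i.e. n is even. */
    public static boolean isEven(int n) {
        return (n & 1) == 0;
    }

    /** True if n's lowest bit is 1, i.e. n is odd. */
    public static boolean isOdd(int n) {
        return (n & 1) == 1;
    }
}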

2. x/y

/**
 * Divide x by y.
 * Constraints: x >= 0, y > 0
 *
 *              | ArithmeticException                           y == 0
 * Principle    | (0, 0)                                        x == 0
 *      x / y = | (q'<<1, r'<<1)      where (q',r') = (x>>1)/y  x even
 *              | (q'<<1, (r'<<1)+1)  where (q',r') = (x>>1)/y  x odd
 * (if the doubled remainder reaches y, carry 1 into the quotient and subtract y)
 * @param x
 * @param y
 * @return (quotient, remainder)
 */
public static Pair<Integer, Integer> doDivide(int x, int y) {
    if (y == 0) throw new ArithmeticException();

    if (x == 0) return Pair.build(0, 0);

    Pair<Integer, Integer> half = doDivide(x >> 1, y);
    int left = half.getLeft() << 1;
    int right = half.getRight() << 1;
    if (NumberUtils.isOdd(x)) right += 1;

    return (right < y) ? Pair.build(left, right) : Pair.build(left + 1, right - y);
}
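
doDivide also assumes a small Pair class with a build factory and getLeft/getRight accessors; one possible minimal sketch (hypothetical, not from the original post):

/** Minimal immutable pair assumed by doDivide (hypothetical sketch). */
public final class Pair<L, R> {
    private final L left;
    private final R right;

    private Pair(L left, R right) {
        this.left = left;
        this.right = right;
    }

    public static <L, R> Pair<L, R> build(L left, R right) {
        return new Pair<L, R>(left, right);
    }

    public L getLeft()  { return left; }
    public R getRight() { return right; }
}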

3. pow(x, y) % N

/**
 * pow(x, y) % N
 * Constraints: x > 0, y >= 0, N > 1
 *
 * Principle:               | (pow(x, y>>1))^2 % N       y even
 *        pow(x, y) % N  =  |
 *                          | x * (pow(x, y>>1))^2 % N   y odd
 * @param x
 * @param y
 * @param N
 * @return pow(x, y) % N
 */
public static int powerMod(int x, int y, int N) {
    if (y == 0) return 1;

    int half = powerMod(x, y >> 1, N);
    // use long for the intermediate products to avoid int overflow
    long sq = (long) half * half % N;

    return NumberUtils.isEven(y) ? (int) sq : (int) ((long) x % N * sq % N);
}

4. Euclid's GCD algorithm

/**
 * GCD(x, y)
 * Constraints: x >= y >= 0, x and y not both zero
 *
 * Principle:
 *       GCD(x, y) = GCD(y, x % y)
 *
 * @param x
 * @param y
 * @return the greatest common divisor of x and y
 */
public static int gcd(int x, int y) {
    if (y == 0) return x;
    return gcd(y, x % y);
}

5. Primality testing

/**
 * Probabilistic primality test.
 * Constraints: x > 1, K > 0
 *
 * Principle:
 * Fermat's little theorem: if p is prime, then for every 1 <= a < p,
 * pow(a, p-1) ≡ 1 (mod p).
 * This is a necessary but not a sufficient condition for primality,
 * so we pick K random values of a and run the Fermat test with each of them.
 *
 * @param x the number to test
 * @param K how many random bases to try
 * @return true if x passes all K Fermat tests (probably prime)
 */
public static boolean isPrime(int x, int K) {
    for (int i = 0; i < K; i++) {
        // pick a random base a in [1, x-1]; a == 0 would always fail the test
        int a = 1 + (int) (Math.random() * (x - 1));
        if (powerMod(a, x - 1, x) != 1) {
            return false;
        }
    }
    return true;
}
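
A small driver, not part of the original post, that exercises the routines above; it assumes they all live in one class together with the NumberUtils and Pair sketches shown earlier:

public static void main(String[] args) {
    System.out.println(doMultiply(6, 7));                        // 42
    Pair<Integer, Integer> qr = doDivide(17, 5);
    System.out.println(qr.getLeft() + " " + qr.getRight());      // 3 2
    System.out.println(powerMod(3, 5, 7));                       // 243 % 7 = 5
    System.out.println(gcd(48, 36));                             // 12
    System.out.println(isPrime(97, 20));                         // true
    System.out.println(isPrime(91, 20));                         // almost certainly false (91 = 7 * 13)
}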

Summary: as I read it, the Algorithms book cares more about industrial-strength (or at least practical) algorithms, which sets it apart from Introduction to Algorithms; that one is the academic school, this one is the engineer's school. Take the primality test above: the book applies Fermat's little theorem several times to drive down the probability of a wrong answer (in theory a misjudgement is still possible), but as the author points out, many great algorithms are correct with very high probability rather than with certainty.

PostgreSQL

1. What is PostgreSQL

The World’s most advanced open source database.

PostgreSQL is a powerful, open source object-relational database system.

Homepage: http://www.postgresql.org/

Supported platforms: Linux, UNIX (AIX, BSD, HP-UX, SGI IRIX, Mac OS X, Solaris, Tru64), and Windows.

Features: full support for foreign keys, joins, views, triggers, and stored procedures.

Data types: most SQL:2008 data types are included, such as INTEGER, NUMERIC, BOOLEAN, CHAR, VARCHAR, DATE, INTERVAL, and TIMESTAMP, along with storage of binary large objects such as pictures, sounds, or video. Users can also create custom data types, which can be indexed efficiently, typically via PostgreSQL's GiST mechanism; the PostGIS geographic data types are one example.

Language support: native programming interfaces for C/C++, Java, .Net, Perl, Python, Ruby, Tcl, ODBC, among others, and exceptional documentation.

User manuals:
http://www.postgresql.org/docs/manuals/
http://www.highgo.com.cn/docs/docs90cn/index.html

Size limits:

Maximum Database Size: Unlimited

Maximum Table Size: 32 TB

Maximum Row Size: 1.6 TB

Maximum Field Size: 1 GB

Maximum Rows per Table: Unlimited

Maximum Columns per Table: 250 - 1600 depending on column types

Maximum Indexes per Table: Unlimited

Extensions:

(1) Geospatial objects: PostGIS (GPL)

(2) Full-text search: via Tsearch2 or OpenFTS

Go-to person: Lao He (老何)

2. PostgreSQL history

See the Wikipedia article:
http://zh.wikipedia.org/wiki/PostgreSQL

3. PostgreSQL vs. MySQL

Well-known MySQL users: Slashdot, Twitter, Facebook, Wikipedia

Well-known PostgreSQL users: Yahoo, Disqus

4. Installing PostgreSQL

1. Command-line installation (using Ubuntu as an example)

(1) Install the PostgreSQL client
sudo apt-get install postgresql-client
(2) Install the PostgreSQL server
sudo apt-get install postgresql

Under normal circumstances, once installation finishes the PostgreSQL server starts automatically and listens on local port 5432.

(3) Install the graphical admin tool:
sudo apt-get install pgadmin3
(4) Install PostGIS:
sudo apt-get install postgresql-9.1-postgis

On my machine, the postgresql-9.1-postgis package ends up installing PostGIS 1.5.

2. Graphical installation

(1) Download the installer:
http://www.enterprisedb.com/products-services-training/pgdownload
(2) Install PostgreSQL

After the download finishes, run the installer directly (Ubuntu example):
xiaobaoqiu@xiaobaoqiu:~/Download$ sudo ./postgresql-9.3.4-3-linux-x64.run
(3) Install PostGIS

Near the end of the installation, leave the StackBuilder checkbox ticked; StackBuilder can download and install extension packages. After clicking Finish, open StackBuilder, select the local PostgreSQL instance, click Next, then expand Spatial Extensions and tick PostGIS to download and install it.

(4) Set environment variables

In /etc/profile, add /opt/PostgreSQL/9.3/bin to the PATH.

5. Basic PostgreSQL usage

1. Check the version
xiaobaoqiu@xiaobaoqiu:/var/run/postgresql$ psql --version
psql (PostgreSQL) 9.3.4

After a fresh installation, a database named postgres and a database user named postgres are created by default. Note that a Linux system user named postgres is created as well.

2. Log in: psql -h host -U user -d database_name -p port

PostgreSQL listens on port 5432 by default; the -d option selects the database:
xiaobaoqiu@xiaobaoqiu:/var/run/postgresql$ psql -h localhost -U postgres
Password for user postgres: 
psql.bin (9.3.4)
Type "help" for help.
3. Quit
\q
4. List all roles
postgis_template=# \du
                             List of roles
 Role name |                   Attributes                   | Member of 
-----------+------------------------------------------------+-----------
 postgres  | Superuser, Create role, Create DB, Replication | {}
5. Connection info
postgres=# \conninfo
You are connected to database "postgres" as user "postgres" on host "localhost" at port "5432".
6. Show databases
hotel=# \l
                                      List of databases
        Name         |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges   
---------------------+----------+----------+------------+------------+-----------------------
 fb_stat             | postgres | UTF8     | en_US.utf8 | en_US.utf8 | 
7. Switch database
postgres=# \c postgis_template
You are now connected to database "postgis_template" as user "postgres".
8. Show tables

\d lists all relations:
postgis_template=# \d
                 List of relations
  Schema  |       Name        |   Type   |  Owner   
----------+-------------------+----------+----------
 public   | geography_columns | view     | postgres
 public   | geometry_columns  | view     | postgres
 public   | spatial_ref_sys   | table    | postgres
 topology | layer             | table    | postgres
 topology | topology          | table    | postgres
 topology | topology_id_seq   | sequence | postgres
(6 rows)

\d table_name describes a single relation:
postgis_template=# \d geography_columns
     View "public.geography_columns"
       Column       |  Type   | Modifiers 
--------------------+---------+-----------
 f_table_catalog    | name    | 
 f_table_schema     | name    | 
 f_table_name       | name    | 
 f_geography_column | name    | 
 coord_dimension    | integer | 
 srid               | integer | 
 type               | text    |
9. Basic SQL operations

Everyday database operations are just ordinary SQL:
-- Create a table
CREATE TABLE user_tbl(name VARCHAR(20), signup_date DATE);

-- Insert a row
INSERT INTO user_tbl(name, signup_date) VALUES('张三', '2013-12-22');

-- Select rows
SELECT * FROM user_tbl;

-- Update a row
UPDATE user_tbl SET name = '李四' WHERE name = '张三';

-- Delete rows
DELETE FROM user_tbl WHERE name = '李四';

-- Add a column
ALTER TABLE user_tbl ADD email VARCHAR(40);

-- Change a column's constraints
ALTER TABLE user_tbl ALTER COLUMN signup_date SET NOT NULL;

-- Rename a column
ALTER TABLE user_tbl RENAME COLUMN signup_date TO signup;

-- Drop a column
ALTER TABLE user_tbl DROP COLUMN email;

-- Rename the table
ALTER TABLE user_tbl RENAME TO backup_tbl;

-- Drop the table
DROP TABLE IF EXISTS backup_tbl;
10. Command-line tools shipped with the PostgreSQL client (run directly from the shell):
clusterdb -- cluster a PostgreSQL database
createdb -- create a new PostgreSQL database
createlang -- install a PostgreSQL procedural language
createuser -- define a new PostgreSQL user account
dropdb -- remove a PostgreSQL database
droplang -- remove a PostgreSQL procedural language
dropuser -- remove a PostgreSQL user account
ecpg -- embedded SQL C preprocessor
pg_basebackup -- take a base backup of a PostgreSQL cluster
pg_config -- retrieve information about the installed version of PostgreSQL
pg_dump --  extract a PostgreSQL database into a script file or other archive file
pg_dumpall -- extract a PostgreSQL database cluster into a script file
pg_receivexlog -- streams transaction logs from a PostgreSQL cluster
pg_restore --  restore a PostgreSQL database from an archive file created by pg_dump
psql --  PostgreSQL interactive terminal
reindexdb -- reindex a PostgreSQL database
vacuumdb -- garbage-collect and analyze a PostgreSQL database

6. Clients

1. JDBC driver
http://jdbc.postgresql.org/
2. Maven dependency:
<!--postgresql-->
<dependency>
    <groupId>postgresql</groupId>
    <artifactId>postgresql</artifactId>
</dependency>
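
A minimal sketch of using the driver from Java once the dependency above is on the classpath. Note that newer driver versions are published under the org.postgresql groupId and need an explicit version; the URL, database, user and password below are placeholders, not values from this post:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class PgJdbcDemo {
    public static void main(String[] args) throws Exception {
        // placeholder connection settings; adjust host, port, database, user, password
        String url = "jdbc:postgresql://localhost:5432/postgres";
        try (Connection conn = DriverManager.getConnection(url, "postgres", "123456");
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery("SELECT version()")) {
            while (rs.next()) {
                System.out.println(rs.getString(1));   // prints the server version string
            }
        }
    }
}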

7. PostGIS

PostGIS is an important piece of open-source GIS infrastructure, one of the few open-source spatial database storage options. It gives PostgreSQL the ability to store geospatial data, turning it into a spatial database capable of spatial data management, measurement, and geometric/topological analysis.

Supported spatial object types: point, linestring, polygon, multipoint, multilinestring, multipolygon, and geometry collection.

Spatial analysis functions: intersection, union, difference, buffering, and so on.

Spatial indexing: implemented with R-tree over GiST, which greatly speeds up spatial (bounding-box) queries.

Storage: well-known text (WKT) and well-known binary (WKB); the former represents spatial objects as text, the latter as a binary stream. PostGIS uses both formats to store spatial objects in the database.

Manuals:
http://postgis.net/docs/manual-2.0/
http://workshops.boundlessgeo.com/postgis-intro/

Geography: latitude/longitude, i.e. spherical coordinates.

The most commonly used spatial reference system is 4326, which refers to WGS 84 (the World Geodetic System), the spherical reference surface of the Earth used by GPS (the Global Positioning System). Its origin is the Earth's center of mass, and its accuracy is about ±1 meter. WGS 84 coordinates are in degrees: the first coordinate is longitude, ranging from -180 to 180, and the second is latitude, ranging from -90 to 90.
Los Angeles: POINT(-118.4079 33.9434)
Paris: POINT(2.3490 48.8533)
-- distance in degrees
SELECT ST_Distance(
  ST_GeometryFromText('POINT(-118.4079 33.9434)', 4326), -- Los Angeles (LAX)
  ST_GeometryFromText('POINT(2.5559 49.0083)', 4326)     -- Paris (CDG)
  );
121.898285970107 degrees
-- distance in meters
SELECT ST_Distance(
  ST_GeographyFromText('POINT(-118.4079 33.9434)'), -- Los Angeles (LAX)
  ST_GeographyFromText('POINT(2.5559 49.0083)')     -- Paris (CDG)
  );
9124665.26917268 meters

Geometry: planar (Cartesian) coordinates. Commonly used functions:
ST_AsText

# points
ST_X, ST_Y
# linestrings
ST_Length        length of the line
ST_StartPoint    start point
ST_EndPoint      end point
ST_NPoints       number of points
# polygons
ST_Area(geometry)             area
ST_NRings(geometry)           number of rings
ST_ExteriorRing(geometry)     exterior ring
ST_InteriorRingN(geometry,n)  n-th interior ring
ST_Perimeter(geometry)        total length of all rings
# collections
MultiPoint, a collection of points
MultiLineString, a collection of linestrings
MultiPolygon, a collection of polygons
GeometryCollection, a heterogeneous collection of any geometry (including other collections)

A practical case: a gpoint column holds the coordinate:
 gpoint       | geography(Point,4326)       |

Find the 10 POIs closest to the coordinate (117.28243891065 39.096105298613), searching within 1 km:
SELECT id, name, city_code, st_astext(gpoint) FROM poi
WHERE ST_DWithin(GeomFromText('POINT(117.28243891065 39.096105298613)',4326), gpoint, 1000)
ORDER BY ST_Distance(GeomFromText('POINT(117.28243891065 39.096105298613)',4326), gpoint)
LIMIT 10;
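
For completeness, a hedged sketch of issuing the same nearest-neighbor query from Java over JDBC, building the WKT point from longitude/latitude variables. The poi table, gpoint column and connection settings follow the example above and are assumptions here; on PostGIS 2.x the function would be ST_GeomFromText rather than GeomFromText:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class NearestPoiDemo {
    public static void main(String[] args) throws Exception {
        double lon = 117.28243891065, lat = 39.096105298613;
        // WKT uses "longitude latitude" order
        String wkt = "POINT(" + lon + " " + lat + ")";
        String sql = "SELECT id, name, city_code, st_astext(gpoint) FROM poi "
                   + "WHERE ST_DWithin(GeomFromText(?, 4326), gpoint, 1000) "
                   + "ORDER BY ST_Distance(GeomFromText(?, 4326), gpoint) LIMIT 10";
        // placeholder connection settings
        try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/hotel", "postgres", "123456");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, wkt);
            ps.setString(2, wkt);
            try (ResultSet rs = ps.executeQuery()) {
                while (rs.next()) {
                    System.out.println(rs.getString("id") + " " + rs.getString("name"));
                }
            }
        }
    }
}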

MySQL Binlog

1. What is the binlog

MySQL keeps several different log files, including the error log, the slow query log, and the binary log, which help you figure out what is going on inside mysqld. The slow query log, for example, is often used to find the slow queries in a system so that their SQL can be tuned.

Log directory (/var/log/mysql/; the location is configured in my.cnf):
xiaobaoqiu@xiaobaoqiu:~$ ls /var/log/mysql/
error.log       error.log.6.gz    mysql-bin.000283  mysql-bin.000289  mysql-bin.000295  mysql-slow.log.1.gz  mysql-slow.log.7.gz
error.log.1.gz  error.log.7.gz    mysql-bin.000284  mysql-bin.000290  mysql-bin.000296  mysql-slow.log.2.gz
error.log.2.gz  mysql-bin.000279  mysql-bin.000285  mysql-bin.000291  mysql-bin.000297  mysql-slow.log.3.gz
error.log.3.gz  mysql-bin.000280  mysql-bin.000286  mysql-bin.000292  mysql-bin.000298  mysql-slow.log.4.gz
error.log.4.gz  mysql-bin.000281  mysql-bin.000287  mysql-bin.000293  mysql-bin.index   mysql-slow.log.5.gz
error.log.5.gz  mysql-bin.000282  mysql-bin.000288  mysql-bin.000294  mysql-slow.log    mysql-slow.log.6.gz

The binary log (binlog) is the most important of these files. It is a binary file that records modifications to the database: when a SQL statement that changes the database finishes executing, a record is appended to the end of the binlog and the statement parser is notified that execution is complete.

2. How to enable the binlog

Configure it in the MySQL configuration file /etc/mysql/my.cnf by enabling this option:
log_bin                 = /var/log/mysql/mysql-bin.log

It takes effect after restarting the MySQL service:
sudo service mysql restart;

Check whether the binlog is enabled:
mysql> show variables like 'log_bin'; 
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_bin       | ON    |
+---------------+-------+
1 row in set (0.00 sec)

Two other binlog-related settings, discussed later:
expire_logs_days        = 10
max_binlog_size         = 100M

3. Binlog file types

The binlog actually consists of two kinds of files: an index file and a series of log files, as shown below:
xiaobaoqiu@xiaobaoqiu:~$ ls /var/log/mysql/ | grep bin
mysql-bin.000279
mysql-bin.000280
mysql-bin.000281
mysql-bin.000282
mysql-bin.000283
mysql-bin.000284
mysql-bin.000285
mysql-bin.000286
mysql-bin.000287
mysql-bin.000288
mysql-bin.000289
mysql-bin.000290
mysql-bin.000291
mysql-bin.000292
mysql-bin.000293
mysql-bin.000294
mysql-bin.000295
mysql-bin.000296
mysql-bin.000297
mysql-bin.000298
mysql-bin.index

The index file tracks all of the binlog log files and controls how they are created and named; each line of the index file names one complete log file. To list all of the log files, use the show binary logs command, which reads exactly this index file:
mysql> show binary logs;
+------------------+-----------+
| Log_name         | File_size |
+------------------+-----------+
| mysql-bin.000279 |       126 |
| mysql-bin.000280 |       150 |
| mysql-bin.000281 |       126 |
| mysql-bin.000282 |       150 |
| mysql-bin.000283 |       126 |
| mysql-bin.000284 |       150 |
| mysql-bin.000285 |       126 |
| mysql-bin.000286 |       126 |
| mysql-bin.000287 |       150 |
| mysql-bin.000288 |       126 |
| mysql-bin.000289 |       150 |
| mysql-bin.000290 |       126 |
| mysql-bin.000291 |       150 |
| mysql-bin.000292 |       126 |
| mysql-bin.000293 |       150 |
| mysql-bin.000294 |       126 |
| mysql-bin.000295 |       150 |
| mysql-bin.000296 |       126 |
| mysql-bin.000297 |       150 |
| mysql-bin.000298 |       935 |
+------------------+-----------+
20 rows in set (0.00 sec)

The command lists every binlog file together with its size.

4. Key binlog parameters

(1). max_binlog_size

The maximum size of a binlog file is controlled by the max_binlog_size variable, which can be set in my.cnf:
max_binlog_size         = 100M

When a binlog file exceeds this size, a new one is created. Restarting the MySQL service or flushing the logs (the flush logs command) also starts a new binlog file. The parameter can be changed at runtime:
mysql> set global max_binlog_size=1024*1024;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'max_binlog_size'; 
+-----------------+---------+
| Variable_name   | Value   |
+-----------------+---------+
| max_binlog_size | 1048576 |
+-----------------+---------+
1 row in set (0.00 sec)

(2). expire_logs_days: the expire_logs_days parameter makes MySQL delete old binlogs automatically. Its unit is days, and the default, as seen in MySQL, is 10:
mysql> show variables like 'expire_logs_days'; 
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| expire_logs_days | 10    |
+------------------+-------+
1 row in set (0.00 sec)

It is configured in my.cnf:
expire_logs_days        = 10

It can also be changed at runtime (administrator privileges required):
mysql> set global expire_logs_days=3;
Query OK, 0 rows affected (0.00 sec)

mysql> show variables like 'expire_logs_days'; 
+------------------+-------+
| Variable_name    | Value |
+------------------+-------+
| expire_logs_days | 3     |
+------------------+-------+
1 row in set (0.00 sec)

5. Binlog structure

Binlog files are binary, so opening them directly is pointless. Use the mysqlbinlog tool to inspect them:
xiaobaoqiu@xiaobaoqiu:/var/log/mysql$ mysqlbinlog  mysql-bin.000298 > ~/mysql-bin.000298.text

The resulting file looks like this:
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=1*/;
/*!40019 SET @@session.max_insert_delayed_threads=0*/;
/*!50003 SET @OLD_COMPLETION_TYPE=@@COMPLETION_TYPE,COMPLETION_TYPE=0*/;
DELIMITER /*!*/;
# at 4
#140524 14:15:56 server id 1  end_log_pos 107   Start: binlog v 4, server v 5.5.34-0ubuntu0.13.04.1-log created 140524 14:15:56
BINLOG '
HDmAUw8BAAAAZwAAAGsAAAAAAAQANS41LjM0LTB1YnVudHUwLjEzLjA0LjEtbG9nAAAAAAAAAAAA
AAAAAAAAAAAAAAAAAAAAAAAAEzgNAAgAEgAEBAQEEgAAVAAEGggAAAAICAgCAA==
'/*!*/;
# at 107 
#140524 15:10:44 server id 1  end_log_pos 218   Query   thread_id=38    exec_time=0     error_code=0
use `test`/*!*/;
SET TIMESTAMP=1400915444/*!*/;
SET @@session.pseudo_thread_id=38/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=0/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
DROP TABLE `order_log` /* generated by server */
/*!*/;
...
# at 461
#140524 15:11:07 server id 1  end_log_pos 576   Query   thread_id=38    exec_time=0     error_code=0
SET TIMESTAMP=1400915467/*!*/;
DROP TABLE `qbooking_role` /* generated by server */
/*!*/;
# at 576
#140524 15:11:17 server id 1  end_log_pos 700   Query   thread_id=38    exec_time=0     error_code=0
SET TIMESTAMP=1400915477/*!*/;
DROP TABLE `qbooking_role_resource` /* generated by server */
/*!*/;
# at 700
#140524 15:11:22 server id 1  end_log_pos 815   Query   thread_id=38    exec_time=0     error_code=0
SET TIMESTAMP=1400915482/*!*/;
DROP TABLE `qbooking_user` /* generated by server */
/*!*/;

Let's take one fragment and walk through it:
# at 576
#140524 15:11:17 server id 1  end_log_pos 700   Query   thread_id=38    exec_time=0     error_code=0
SET TIMESTAMP=1400915477/*!*/;
DROP TABLE `qbooking_role_resource` /* generated by server */
/*!*/;

A quick field-by-field explanation:
# at 576 : where the event starts, at byte offset 576;
#140524 15:11:17 : the event's timestamp, i.e. 2014-05-24 15:11:17;
server id 1 : the id of the server on which the event occurred;
end_log_pos 700 : where the event ends; the next event starts at byte 700;
Query : the event type; in MySQL, insert, update and delete all count as Query events;
thread_id=38 : the thread that executed the statement;
exec_time=0 : how long the statement took to execute;
error_code=0 : the statement's error code;
SET TIMESTAMP=1400915477/*!*/; : the statement's timestamp;
DROP TABLE `qbooking_role_resource` /* generated by server */ : the statement itself;
/*!*/; : the end of the event.

The output of mysqlbinlog can be fed back into mysql to re-execute the SQL statements of those events (it can be used as input to mysql), which is very useful for repairing data after a server crash. For example, to replay the operations recorded in the binlog file mysql-bin.000298:
xiaobaoqiu@xiaobaoqiu:/var/log/mysql$ mysqlbinlog mysql-bin.000298 | mysql -u root -p

6. Binlog formats

The binlog has three formats: STATEMENT, ROW and MIXED, which correspond to MySQL's three replication techniques: statement-based replication (SBR), row-based replication (RBR) and mixed-based replication (MBR). The differences are:

  1. STATEMENT logs at the SQL statement level: every SQL statement that modifies data is saved to the binlog;

  2. ROW logs at the row level: every changed row is recorded in full detail, while the SQL statement itself is not stored. Its fatal drawback is volume: the log can grow very large (for example after an UPDATE with no WHERE clause, or an ALTER TABLE);

  3. MIXED: MySQL chooses between STATEMENT and ROW for each individual statement. STATEMENT level still just records the statement, as before. ROW level has been optimised in newer versions so that not every change is logged as rows; schema changes, for example, are logged in STATEMENT mode, while statements that genuinely modify data, such as UPDATE or DELETE, still log every changed row.

The default format is STATEMENT. You can check the current binlog format:
mysql> show variables like 'binlog_format'; 
+---------------+-----------+
| Variable_name | Value     |
+---------------+-----------+
| binlog_format | STATEMENT |
+---------------+-----------+
1 row in set (0.00 sec)

7. Inspecting the binlog from inside MySQL

(1). Show the contents of the first binlog file only:
mysql> show binlog events;
+------------------+-----+-------------+-----------+-------------+--------------------------------------------------------+
| Log_name         | Pos | Event_type  | Server_id | End_log_pos | Info                                                   |
+------------------+-----+-------------+-----------+-------------+--------------------------------------------------------+
| mysql-bin.000282 |   4 | Format_desc |         1 |         107 | Server ver: 5.5.34-0ubuntu0.13.04.1-log, Binlog ver: 4 |
| mysql-bin.000282 | 107 | Rotate      |         1 |         150 | mysql-bin.000283;pos=4                                 |
+------------------+-----+-------------+-----------+-------------+--------------------------------------------------------+
2 rows in set (0.01 sec)

(2). Show which binlog file is currently being written:
mysql> show master status;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000302 |      316 |              |                  |
+------------------+----------+--------------+------------------+
1 row in set (0.00 sec)

(3). Show the contents of a specific binlog file:
mysql> show binlog events in 'mysql-bin.000302';
+------------------+-----+-------------+-----------+-------------+-----------------------------------------------------------------+
| Log_name         | Pos | Event_type  | Server_id | End_log_pos | Info                                                            |
+------------------+-----+-------------+-----------+-------------+-----------------------------------------------------------------+
| mysql-bin.000302 |   4 | Format_desc |         1 |         107 | Server ver: 5.5.34-0ubuntu0.13.04.1-log, Binlog ver: 4          |
| mysql-bin.000302 | 107 | Query       |         1 |         175 | BEGIN                                                           |
| mysql-bin.000302 | 175 | Query       |         1 |         289 | use `test`; insert into test_table(`name`) values('xiaobaoqiu') |
| mysql-bin.000302 | 289 | Xid         |         1 |         316 | COMMIT /* xid=122 */                                            |
+------------------+-----+-------------+-----------+-------------+-----------------------------------------------------------------+
4 rows in set (0.00 sec)

8. The mysqlbinlog tool

Another way to inspect binlog files is the mysqlbinlog tool; the most basic usage:
xiaobaoqiu@xiaobaoqiu:/var/log/mysql$ mysqlbinlog mysql-bin.000302

The official documentation is the authoritative reference:
http://dev.mysql.com/doc/refman/5.1/zh/client-side-scripts.html#mysqlbinlog

Normally, mysqlbinlog reads binary log files directly and applies them to the local MySQL server. With the --read-from-remote-server option it can also read binary logs from a remote server. When reading remote logs, connection options can be given to say how to connect to the server, but they are ignored unless --read-from-remote-server is also specified. These options are --host, --password, --port, --protocol, --socket and --user.

mysqlbinlog options (from mysqlbinlog --help):
Usage: mysqlbinlog [options] log-files
  -?, --help          Display this help and exit.
  --base64-output[=name]
                      Determine when the output statements should be
                      base64-encoded BINLOG statements: 'never' disables it and
                      works only for binlogs without row-based events;
                      'decode-rows' decodes row events into commented SQL
                      statements if the --verbose option is also given; 'auto'
                      prints base64 only when necessary (i.e., for row-based
                      events and format description events); 'always' prints
                      base64 whenever possible. 'always' is deprecated, will be
                      removed in a future version, and should not be used in a
                      production system.  --base64-output with no 'name'
                      argument is equivalent to --base64-output=always and is
                      also deprecated.  If no --base64-output[=name] option is
                      given at all, the default is 'auto'.
  --character-sets-dir=name
                      Directory for character set files.
  -d, --database=name List entries for just this database (local log only).
  --debug-check       Check memory and open file usage at exit.
  --debug-info        Print some debug info at exit.
  --default-auth=name Default authentication client-side plugin to use.
  -D, --disable-log-bin
                      Disable binary log. This is useful, if you enabled
                      --to-last-log and are sending the output to the same
                      MySQL server. This way you could avoid an endless loop.
                      You would also like to use it when restoring after a
                      crash to avoid duplication of the statements you already
                      have. NOTE: you will need a SUPER privilege to use this
                      option.
  -F, --force-if-open Force if binlog was not closed properly.
                      (Defaults to on; use --skip-force-if-open to disable.)
  -f, --force-read    Force reading unknown binlog events. With this option,
                      if mysqlbinlog reads a binlog event it does not
                      recognize, it prints a warning, skips the event and
                      continues; without it, mysqlbinlog stops on such events.
  -H, --hexdump       Augment output with hexadecimal and ASCII event dump,
                      shown in comments; useful for debugging replication.
  -h, --host=name     Get the binlog from this server (host name).
  -l, --local-load=name
                      Prepare local temporary files for LOAD DATA INFILE in the
                      specified directory.
  -o, --offset=#      Skip the first N entries.
  -p, --password[=name]
                      Password to connect to the remote server.
  --plugin-dir=name   Directory for client-side plugins.
  -P, --port=#        TCP/IP port number to use for connection, or 0 to pick
                      the default from, in order of preference, my.cnf,
                      $MYSQL_TCP_PORT, /etc/services, built-in default (3306).
  --protocol=name     The protocol to use for connection (tcp, socket, pipe,
                      memory).
  -R, --read-from-remote-server
                      Read binary logs from a MySQL server. If this option is
                      not given, the connection options --host, --password,
                      --port, --protocol, --socket and --user are ignored.
  -r, --result-file=name
                      Direct output to the given file.
  --server-id=#       Extract only binlog entries created by the server having
                      the given id.
  --set-charset=name  Add 'SET NAMES character_set' to the output.
  -s, --short-form    Just show regular queries: no extra info and no row-based
                      events. This is for testing only, and should not be used
                      in production systems. If you want to suppress
                      base64-output, consider using --base64-output=never
                      instead.
  -S, --socket=name   The socket file to use for connection.
  --start-datetime=name
                      Start reading the binlog at the first event having a
                      datetime equal to or later than the argument; the value
                      is interpreted in the local time zone of the machine
                      running mysqlbinlog and may be in any format the MySQL
                      server accepts for DATETIME and TIMESTAMP types, for
                      example: 2004-12-25 11:25:56 (you should probably quote
                      it for your shell).
  -j, --start-position=#
                      Start reading the binlog at position N. Applies to the
                      first binlog passed on the command line.
  --stop-datetime=name
                      Stop reading the binlog at the first event having a
                      datetime equal to or later than the argument; same
                      format as --start-datetime.
  --stop-position=#   Stop reading the binlog at position N. Applies to the
                      last binlog passed on the command line.
  -t, --to-last-log   Requires -R. Do not stop at the end of the requested
                      binlog but rather continue printing until the end of the
                      last binlog of the MySQL server. If you send the output
                      to the same MySQL server, that may lead to an endless
                      loop.
  -u, --user=name     Connect to the remote server as this user.
  -v, --verbose       Reconstruct SQL statements out of row events. -v -v adds
                      comments on column data types.
  -V, --version       Print version and exit.
  --open-files-limit=#
                      Used to reserve file descriptors for use by this program.

Typically you read by start and stop datetime, or by start and stop position:

xiaobaoqiu@xiaobaoqiu:/var/log/mysql$ mysqlbinlog --start-datetime='2014-05-26 15:08:00' --stop-datetime='2014-05-26 15:10:00' mysql-bin.000302
xiaobaoqiu@xiaobaoqiu:/var/log/mysql$ mysqlbinlog --start-position=107 --stop-position=175 mysql-bin.000302

Example of reading a binlog from a remote server:
mysqlbinlog -u username -p password -h127.0.0.1 -P3306 \
--read-from-remote-server --start-datetime='2014-05-25 00:00:00' --stop-datetime='2014-05-26 00:00:00' mysql-bin.000001 > mysql-bin.000001.txt

9. What the binlog is used for

1. Data recovery

For the demonstration, first truncate the table test_table; the table is now empty:
mysql> truncate table test_table;
Query OK, 0 rows affected (0.07 sec)

mysql> select * from test_table;
Empty set (0.00 sec)

Save this state of the database, i.e. back up the current data:
xiaobaoqiu@xiaobaoqiu:/var/log/mysql$ mysqldump -uroot -p123456 test test_table  >  ~/test_table.sql

To make the demo easier to follow, start a new binlog file (you could skip this and rely on the --start-datetime option instead):
mysql> flush logs;
Query OK, 0 rows affected (0.14 sec)

This produces a new binlog file, mysql-bin.000304. Now insert some data into the database:
mysql> insert into test_table(`id`,`name`) values(1,'xiaobaoqiu1');
Query OK, 1 row affected (0.07 sec)

mysql> insert into test_table(`id`,`name`) values(2,'xiaobaoqiu2');
Query OK, 1 row affected (0.07 sec)

mysql> insert into test_table(`id`,`name`) values(3,'xiaobaoqiu3');
Query OK, 1 row affected (0.08 sec)

Suppose the data now gets destroyed (here I start yet another binlog file and then drop the table):
mysql> flush logs;
Query OK, 0 rows affected (0.14 sec)

mysql> drop table test_table;
Query OK, 0 rows affected (0.09 sec)

The data is lost, but mysql-bin.000304 can be used to recover it: first import the data backed up earlier, then replay the binlog file. Import the backup:
xiaobaoqiu@xiaobaoqiu:/var/log/mysql$ mysql -uroot -p123456 test < ~/test_table.sql

This brings us back to the state at backup time:
mysql> select * from test_table;
Empty set (0.00 sec)

Now replay our binlog to recover the data:
xiaobaoqiu@xiaobaoqiu:/var/log/mysql$ mysqlbinlog mysql-bin.000304 | mysql -uroot -p123456

Time for the magic: all the data is back:
mysql> select * from test_table;
+----+-------------+
| id | name        |
+----+-------------+
|  1 | xiaobaoqiu1 |
|  2 | xiaobaoqiu2 |
|  3 | xiaobaoqiu3 |
+----+-------------+
3 rows in set (0.00 sec)

This is a simple simulation; a real environment is certainly far more complex, but the principle is the same.

2. Replication

Take the familiar master-slave setup as an example; the replication process works as follows (the master must have the binlog enabled):

  1. The master records the binary log. Before each transaction that updates data completes, the master records the changes in its binary log. MySQL writes transactions into the binary log serially, even if their statements were interleaved during execution. Once the events have been written to the binary log, the master tells the storage engine to commit the transaction;

  2. The slave copies the master's binary log into its own relay log. The slave first starts a worker thread (the I/O thread), which opens an ordinary connection to the master and starts the binlog dump process. The binlog dump process reads events from the master's binary log; once it has caught up with the master it sleeps and waits for new events. The I/O thread writes the events into the relay log;

  3. The SQL slave thread handles the final step: it reads events from the relay log and replays them to update the slave's data so that it matches the master. As long as this thread keeps up with the I/O thread, the relay log usually stays in the OS cache, so its overhead is small.

MySQL documentation (English):
http://dev.mysql.com/doc/refman/5.0/en/replication.html
http://dev.mysql.com/doc/refman/5.0/en/faqs-replication.html

MySQL documentation (Chinese):
http://dev.mysql.com/doc/refman/5.1/zh/replication.html

Typical uses of replication:

  1. Scale-out solutions: all writes go to the master while reads are spread across multiple slaves. This model improves write efficiency (the master is dedicated to writes), and read throughput grows as the number of slaves increases;

  2. Data security: since the data has already been replicated to a slave, backups can be taken on the slave side;

  3. Analytics: live data is produced on the master while analysis runs on a slave, so the analysis does not affect the master's performance;

  4. Long-distance data distribution: a branch office that needs to work on the data can replicate a copy to a local slave instead of requiring permanent access to the master;

More detailed use cases:
http://dev.mysql.com/doc/refman/5.0/en/replication-solutions.html

How to set up replication:
http://dev.mysql.com/doc/refman/5.0/en/replication-howto.html

A few details:

  1. Once a slave has been set up with a copy of the master's data, it connects to the master and waits for updates. If the master fails, or the slave loses its connection to the master, the slave keeps retrying periodically until it can resume listening for updates. The retry interval is controlled by the --master-connect-retry option and defaults to 60 seconds.

  2. MySQL uses three threads to perform replication (one on the master, two on the slave). When START SLAVE is issued, the slave creates an I/O thread that connects to the master and asks it to send the statements recorded in its binary log. The master creates a thread to send the binary log contents to the slave; it can be identified as the Binlog Dump thread in the master's SHOW PROCESSLIST output. The slave's I/O thread reads what the master's Binlog Dump thread sends and copies it into a local file in the slave's data directory, the relay log. The third thread is the SQL thread, which the slave creates to read the relay log and apply the updates it contains.

  3. Checking the lag between slave and master: the Seconds_Behind_Master field of SHOW SLAVE STATUS. The principle is that every binlog event carries a timestamp; when the slave replicates a binlog event it saves the timestamp of the last event read. The Time column of SHOW PROCESSLIST shows the difference, in seconds, between the timestamp of the last replicated event and the slave machine's clock.

  4. How to use replication to improve system performance: configure one master that is used only for writes, configure as many slaves as possible, and spread reads across the master and the slaves. The slaves can also be started with --skip-innodb (do not use the InnoDB storage engine), --low-priority-updates (give INSERT, REPLACE, DELETE and UPDATE lower priority than SELECT) and --delay-key-write=ALL (delayed key writing causes key buffers not to be flushed between writes for MyISAM tables) to speed them up, so that non-transactional MyISAM tables replace InnoDB tables and the cost of transactions is avoided.

  5. The slave side works out automatically which replication format (ROW, STATEMENT, MIXED) to use;

  6. How to stop a slave from replicating GRANT and REVOKE statements: start the server with --replicate-wild-ignore-table=mysql.%. --replicate-wild-ignore-table=foo%.bar% tells the slave not to replicate operations on tables whose database name matches foo% and table name matches bar%, and privilege operations are precisely operations on the mysql database;

  7. Replication works across operating systems (e.g. master on Linux, slave on Windows) and across hardware architectures (e.g. master on a 64-bit machine, slave on a 32-bit machine);

10. Canal

Canal is a component developed by Alibaba for incremental subscription to and consumption of database changes; it works by disguising itself as a slave. The project wiki:
https://github.com/alibaba/canal/wiki

Below is a walkthrough of how to use canal:

1. Enable the binlog

Canal is built on the MySQL binlog, so binlog writing must be enabled and the binlog format must be set to ROW; for example, in my.cnf:
log_bin                 = /var/log/mysql/mysql-bin.log
expire_logs_days        = 10
max_binlog_size         = 100M
server_id               = 1
binlog_format           = ROW

Restart MySQL:
xiaobaoqiu@xiaobaoqiu:~$ sudo service mysql restart
mysql stop/waiting
mysql start/running, process 11918
2. Grant privileges

Since canal pretends to be a MySQL slave, it needs the privileges of a MySQL slave. Below we create a user and grant privileges (in a real environment the privileges should be locked down much more tightly):
mysql> insert into mysql.user(Host,User,Password) values("localhost","canal",password("123456"));
Query OK, 1 row affected, 3 warnings (0.01 sec)

mysql> grant all on *.* to canal@"localhost" identified by "123456";
Query OK, 0 rows affected (0.00 sec)

mysql> flush privileges;
Query OK, 0 rows affected (0.00 sec)
3. Download canal

Download the package and extract it:
wget http://canal4mysql.googlecode.com/files/canal.deployer-1.0.0.tar.gz

mkdir canal

tar zxvf canal.deployer-1.0.0.tar.gz -C canal

After extraction, go into the canal directory; its layout is:
drwxrwxr-x 2 xiaobaoqiu xiaobaoqiu 4096  5月 28 13:40 bin/
drwxrwxr-x 4 xiaobaoqiu xiaobaoqiu 4096  5月 28 13:37 conf/
drwxrwxr-x 2 xiaobaoqiu xiaobaoqiu 4096  5月 28 13:33 lib/
drwxrwxr-x 3 xiaobaoqiu xiaobaoqiu 4096  5月 28 13:40 logs/

The bin directory contains the scripts for starting and stopping canal:
-rwxr-xr-x 1 xiaobaoqiu xiaobaoqiu 2804  1月 24  2013 startup.sh*
-rwxr-xr-x 1 xiaobaoqiu xiaobaoqiu  902  1月 24  2013 stop.sh*

The lib directory holds canal's dependencies; conf holds canal's configuration, covered in the next section; logs holds canal's log files.

4. Configuration

The canal.properties file:
#################################################
#########         common argument     ############# 
#################################################
# the master's server id
canal.id= 1
canal.ip=
canal.port= 11111
canal.zkServers=l-Xxxxxx:2181

# flush data to zk
canal.zookeeper.flush.period = 1000

# flush meta cursor/parse position to file
canal.file.data.dir = ${canal.conf.dir}
canal.file.flush.period = 1000
## memory store RingBuffer size, should be Math.pow(2,n)
canal.instance.memory.buffer.size = 16384
## memory store RingBuffer used memory unit size , default 1kb
canal.instance.memory.buffer.memunit = 1024 
## meory store gets mode used MEMSIZE or ITEMSIZE
canal.instance.memory.batch.mode = MEMSIZE

## detecing config
canal.instance.detecting.enable = false
canal.instance.detecting.sql = insert into retl.xdual values(1,now()) on duplicate key update x=now()
canal.instance.detecting.interval.time = 3
canal.instance.detecting.retry.threshold = 3
canal.instance.detecting.heartbeatHaEnable = false

# support maximum transaction size, more than the size of the transaction will be cut into multiple transactions delivery
canal.instance.transaction.size =  1024
# mysql fallback connected to new master should fallback times
canal.instance.fallbackIntervalInSeconds = 60

# network config
canal.instance.network.receiveBufferSize = 16384
canal.instance.network.sendBufferSize = 16384
canal.instance.network.soTimeout = 30

# binlog filter config
canal.instance.filter.query.dcl = false
canal.instance.filter.query.dml = false
canal.instance.filter.query.ddl = false

# binlog ddl isolation
canal.instance.get.ddl.isolation = false

#################################################
#########         destinations        ############# 
#################################################
#destinations
canal.destinations= flag
# conf root dir
canal.conf.dir = ../conf
# auto scan instance dir add/remove and start/stop instance
canal.auto.scan = true
canal.auto.scan.interval = 5

canal.instance.global.mode = spring 
canal.instance.global.lazy = false
canal.instance.global.manager.address = 127.0.0.1:1099
#canal.instance.global.spring.xml = classpath:spring/memory-instance.xml
canal.instance.global.spring.xml = classpath:spring/file-instance.xml
#canal.instance.global.spring.xml = classpath:spring/default-instance.xml

The ./flag/instance.properties configuration file:
#################################################
## mysql serverId
canal.instance.mysql.slaveId = 5711

# position info
canal.instance.master.address = 127.0.0.1:3306
canal.instance.master.journal.name = 
canal.instance.master.position = 
canal.instance.master.timestamp = 

#canal.instance.standby.address = 
#canal.instance.standby.journal.name =
#canal.instance.standby.position = 
#canal.instance.standby.timestamp = 

# username/password
canal.instance.dbUsername = root
canal.instance.dbPassword = 123456
canal.instance.defaultDatabaseName =
canal.instance.connectionCharset = UTF-8

# table regex
canal.instance.filter.regex = .*\\..*

#################################################
5. Start canal

Start it:
xiaobaoqiu@xiaobaoqiu:~/canal$ sh ./bin/startup.sh

xiaobaoqiu@xiaobaoqiu:~/canal$ ps aux | grep canal
1000      2857  4.2  0.7 4814248 57364 pts/1   Sl   20:15   0:00 /usr/local/jdk1.6.0_45/bin/java -server -Xms2048m -Xmx3072m -Xmn1024m -XX:SurvivorRatio=2 -XX:PermSize=96m -XX:MaxPermSize=256m -Xss256k -XX:-UseAdaptiveSizePolicy -XX:MaxTenuringThreshold=15 -XX:+DisableExplicitGC -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -XX:+UseCMSCompactAtFullCollection -XX:+UseFastAccessorMethods -XX:+UseCMSInitiatingOccupancyOnly -XX:+HeapDumpOnOutOfMemoryError -Djava.awt.headless=true -Djava.net.preferIPv4Stack=true -Dfile.encoding=UTF-8 -DappName=otter-canal -Dlogback.configurationFile=/home/xiaobaoqiu/canal/bin/../conf/logback.xml -Dcanal.conf=/home/xiaobaoqiu/canal/bin/../conf/canal.properties -classpath .:/home/xiaobaoqiu/canal/bin/../conf:/home/xiaobaoqiu/canal/bin/../lib/zookeeper-3.4.5.jar:/home/xiaobaoqiu/canal/bin/../lib/zkclient-0.1.jar:/home/xiaobaoqiu/canal/bin/../lib/spring-2.5.6.jar:/home/xiaobaoqiu/canal/bin/../lib/slf4j-api-1.6.0.jar:/home/xiaobaoqiu/canal/bin/../lib/protobuf-java-2.4.1.jar:/home/xiaobaoqiu/canal/bin/../lib/oro-2.0.8.jar:/home/xiaobaoqiu/canal/bin/../lib/netty-3.2.5.Final.jar:/home/xiaobaoqiu/canal/bin/../lib/logback-core-1.0.6.jar:/home/xiaobaoqiu/canal/bin/../lib/logback-classic-1.0.6.jar:/home/xiaobaoqiu/canal/bin/../lib/log4j-1.2.14.jar:/home/xiaobaoqiu/canal/bin/../lib/jcl-over-slf4j-1.6.0.jar:/home/xiaobaoqiu/canal/bin/../lib/guava-r09.jar:/home/xiaobaoqiu/canal/bin/../lib/fastjson-1.1.26.jar:/home/xiaobaoqiu/canal/bin/../lib/commons-logging-1.1.1.jar:/home/xiaobaoqiu/canal/bin/../lib/commons-lang-2.6.jar:/home/xiaobaoqiu/canal/bin/../lib/commons-io-2.4.jar:/home/xiaobaoqiu/canal/bin/../lib/commons-beanutils-1.8.2.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.store-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.sink-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.server-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.protocol-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.parse.driver-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.parse.dbsync-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.parse-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.meta-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.instance.spring-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.instance.manager-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.instance.core-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.filter-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.deployer-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/canal.common-1.0.14.jar:/home/xiaobaoqiu/canal/bin/../lib/aviator-2.2.1.jar:.:/usr/local/jdk1.6.0_45/lib:/usr/local/jdk1.6.0_45/jre/lib: com.alibaba.otter.canal.deployer.CanalLauncher

Note that after a successful start, a canal.pid file appears in the bin directory holding the pid of the running canal instance:
-rw-rw-r-- 1 xiaobaoqiu xiaobaoqiu    5  5月 28 20:15 canal.pid

xiaobaoqiu@xiaobaoqiu:~/canal/bin$ cat canal.pid 
2857

Later, stop.sh uses this canal.pid file to stop the canal instance.

6. Using canal

Once canal has the master's binlog we can parse it. A simple client is shown below (note that the 11111 argument matches canal.port = 11111 in canal.properties, "flag" matches canal.destinations = flag, and the username and password are the ones granted above):
// Imports follow the standard canal client packages used by the official example code.
import java.net.InetSocketAddress;
import java.util.List;

import com.alibaba.otter.canal.client.CanalConnector;
import com.alibaba.otter.canal.client.CanalConnectors;
import com.alibaba.otter.canal.common.utils.AddressUtils;
import com.alibaba.otter.canal.protocol.CanalEntry;
import com.alibaba.otter.canal.protocol.Message;

public class App {
    public static void main(String args[]) {
        // create the connection (port and destination must match canal.properties)
        CanalConnector connector = CanalConnectors.newSingleConnector(new InetSocketAddress(AddressUtils.getHostIp(),
                11111), "flag", "canal", "123456");
        final int batchSize = 1000;
        int emptyCount = 0;
        try {
            connector.connect();
            connector.subscribe(".*\\..*");
            connector.rollback();
            System.out.println("Connect success...");
            final int totalEmptyCount = 1000;
            while (emptyCount < totalEmptyCount) {
                Message message = connector.getWithoutAck(batchSize); // fetch up to batchSize entries
                long batchId = message.getId();
                int size = message.getEntries().size();
                if (batchId == -1 || size == 0) {
                    emptyCount++;
                    System.out.println("No data... : " + emptyCount);
                    try {
                        Thread.sleep(1000);
                    } catch (InterruptedException e) {
                    }
                } else {
                    emptyCount = 0;
                    printEntry(message.getEntries());
                }

                connector.ack(batchId); // acknowledge the batch
            }

            System.out.println("No data too many times, exit...");
        } finally {
            connector.disconnect();
        }
    }

    private static void printEntry(List<CanalEntry.Entry> entries) {
        for (CanalEntry.Entry entry : entries) {
            if (entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONBEGIN
                    || entry.getEntryType() == CanalEntry.EntryType.TRANSACTIONEND) {
                continue;
            }

            CanalEntry.RowChange rowChange = null;
            try {
                rowChange = CanalEntry.RowChange.parseFrom(entry.getStoreValue());
            } catch (Exception e) {
                throw new RuntimeException("ERROR ## parser of eromanga-event has an error , data:" + entry.toString(),
                        e);
            }

            CanalEntry.EventType eventType = rowChange.getEventType(); // event type
            System.out.println(String.format("================> binlog[LogfileName=%s:LogfileOffset=%s] , " +
                    "name[DBName=%s,TableName=%s] , EventType : %s", entry
                    .getHeader().getLogfileName(), entry.getHeader().getLogfileOffset(), entry.getHeader()
                    .getSchemaName(), entry.getHeader().getTableName(), eventType));

            for (CanalEntry.RowData rowData : rowChange.getRowDatasList()) {
                if (eventType == CanalEntry.EventType.DELETE) { // DELETE event: print the deleted row
                    printColumn(rowData.getBeforeColumnsList());
                } else if (eventType == CanalEntry.EventType.INSERT) { // INSERT event: print the inserted row
                    printColumn(rowData.getAfterColumnsList());
                } else {
                    System.out.println("-------> before");
                    printColumn(rowData.getBeforeColumnsList());
                    System.out.println("-------> after");
                    printColumn(rowData.getAfterColumnsList());
                }
            }
        }
    }

    private static void printColumn(List<CanalEntry.Column> columns) {
        for (CanalEntry.Column column : columns) {
            System.out.println(column.getName() + " : " + column.getValue() + "    update=" + column.getUpdated());
        }
    }
}

Do an insert in the database:
mysql> insert into test_table(id, name) values(8, 'xiaobaoqiu8');
Query OK, 1 row affected (0.10 sec)

You can see the code respond:
No data... : 13
================> binlog[LogfileName=mysql-bin.000316:LogfileOffset=420] , name[DBName=test,TableName=test_table] , EventType : INSERT
id : 8    update=false
name : xiaobaoqiu8    update=false
No data... : 1
No data... : 2

11. References

http://dev.mysql.com/doc/internals/en/binary-log.html

http://dev.mysql.com/doc/refman/5.7/en/mysqlbinlog.html

Java Concurrency in Practice 1: Introduction

Book: Java Concurrency in Practice (Chinese edition title: Java并发编程实战)

Chapter 1: Introduction

Advantages of multithreading:

  1. Exploiting the power of multicore processors;

Problems:

  1. How to guarantee thread safety (a classic illustration follows below);
  2. The costs that multithreading brings, such as thread context switches and inter-thread communication;
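
A minimal sketch, not from the book, of the classic unsynchronized-counter problem that motivates the thread-safety discussion: two threads increment a shared counter without synchronization, and the final count is usually smaller than expected because ++ is not atomic.

public class UnsafeCounterDemo {
    private static int count = 0;                 // shared, unsynchronized state

    public static void main(String[] args) throws InterruptedException {
        Runnable task = new Runnable() {
            public void run() {
                for (int i = 0; i < 100000; i++) {
                    count++;                      // read-modify-write, not atomic
                }
            }
        };
        Thread t1 = new Thread(task);
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();
        // Expected 200000; lost updates usually make it smaller.
        System.out.println("count = " + count);
    }
}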