Choosing which CAP guarantees to satisfy

CP: guarantees consistency and partition tolerance.

AP: guarantees availability and partition tolerance.

Choose according to the actual architecture: the key question is whether the business mainly needs high availability (with eventual consistency) or strong data consistency.

Read-write splitting and data sharding

Read-write splitting

master: inserts, updates, deletes

slave: selects (queries)

Keep the master and slave consistent. A minimal routing sketch follows.
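
Middleware such as ShardingSphere (covered below) performs this routing automatically. Purely as an illustration of the idea, a hand-rolled router might look like the following; the class and its names are hypothetical and not part of the project:

import javax.sql.DataSource;

// Illustrative only: send writes to the master and reads to the slave.
public class ReadWriteRouter {

    private final DataSource master; // handles INSERT / UPDATE / DELETE
    private final DataSource slave;  // handles SELECT

    public ReadWriteRouter(DataSource master, DataSource slave) {
        this.master = master;
        this.slave = slave;
    }

    // Pick a data source by statement type; real middleware also considers
    // transactions, hints and replication lag.
    public DataSource route(String sql) {
        return sql.trim().toLowerCase().startsWith("select") ? slave : master;
    }
}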

Data sharding

Vertical sharding: used when the table is too wide (too many columns) for the business requirements.

Horizontal sharding: used when the table holds too much data.

Sharded tables must use a consistent primary-key generation rule, and rows are routed by taking the key modulo the number of shards (see the sketch below).
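
A sketch of the modulo rule above; the table name and shard count are made up for illustration:

// Illustrative modulo routing: the physical table index is the primary key
// modulo the shard count, so every node must generate keys with the same rule.
public class ShardRouter {

    private final int shardCount;

    public ShardRouter(int shardCount) {
        this.shardCount = shardCount;
    }

    // e.g. new ShardRouter(2).routeTable("t_order", 1001L) -> "t_order_1"
    public String routeTable(String logicTable, long primaryKey) {
        long index = Math.floorMod(primaryKey, (long) shardCount);
        return logicTable + "_" + index;
    }
}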

Database middleware (Apache ShardingSphere)

Preparation:

One master data source and two slave data sources.

The master must create an account for the slaves and grant it replication privileges; the slaves use this account to read the master's binary log.

-- Create the slave user and set its password

CREATE USER 'slave'@'%' IDENTIFIED WITH mysql_native_password BY 'slave';

-- Grant replication privileges

GRANT REPLICATION SLAVE ON *.* TO 'slave'@'%';

-- Flush privileges

FLUSH PRIVILEGES;

-- Query the current binlog file name and position (needed later on the slave)

SHOW MASTER STATUS;

-- Query the server-id

SHOW VARIABLES LIKE 'server_id';

-- Query the privileges of a given user

SHOW GRANTS FOR 'slave1'@'%';

-- Query the maximum number of connections

SHOW VARIABLES LIKE 'max_connections';

After creating a slave, change the server-id in its configuration file: it must not be the same as the master's (the master's server-id defaults to 1).

-- On the slave, configure the replication relationship. The binlog file name and position come from the SHOW MASTER STATUS output above; make sure MASTER_LOG_FILE and MASTER_LOG_POS are the latest values

CHANGE MASTER TO
  MASTER_HOST='ishow-mysql-master',  
  MASTER_USER='slave1',
  MASTER_PASSWORD='slave1',
  MASTER_PORT=3306,
  MASTER_LOG_FILE='binlog.000005',
  MASTER_LOG_POS=157;

Start master-slave replication

START SLAVE;
-- Check the replication status
SHOW SLAVE STATUS\G

Configuration issues encountered

When a problem occurs, this SQL is usually run to inspect worker-level errors:

select * from performance_schema.replication_applier_status_by_worker;

Issue 1
mysql> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Connecting to source
                  Master_Host: 192.168.10.74
                  Master_User: slave1
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: binlog.000011
          Read_Master_Log_Pos: 32993
               Relay_Log_File: 5ea4d5e3d7f9-relay-bin.000001
                Relay_Log_Pos: 4
        Relay_Master_Log_File: binlog.000011
             Slave_IO_Running: Connecting
            Slave_SQL_Running: Yes
                Last_IO_Errno: 2003
                Last_IO_Error: Error connecting to source 'slave1@192.168.10.74:3306'. This was attempt 1/86400, with a delay of 60 seconds between attempts. Message: Can't connect to MySQL server on '192.168.10.74:3306' (113)

Solution: if MySQL runs inside Docker, use the container name as the host. (Whenever the slave cannot connect to the master, also check network connectivity and the replication account's privileges.)

Issue 2
Last_SQL_Error: Coordinator stopped because there were error(s) in the worker(s). The most recent failure being: Worker 1 failed executing transaction 'ANONYMOUS' at source log binlog.000012, end_log_pos 5310. See error log and/or performance_schema.replication_applier_status_by_worker table for more details about this failure or others, if any.

Use the SQL above to look up the cause of the error.

Fix 1:

Increase the binlog cache; the statement below sets max_binlog_cache_size to 10 GB:

set global max_binlog_cache_size=10737418240;

Fix 2:

  • Make sure both databases contain identical data before starting replication, so the sync stays consistent afterwards

  • Make sure the I/O thread and the SQL thread have both started successfully

Other remedies

Fallback options, run directly on the database server

Restart the slave

stop slave;
start slave;

Discard the relay logs and start with fresh relay logs

reset slave;

Delete the binlog files and start over (reset the master)

reset master;

Notes

binlog format notes

  • binlog_format=STATEMENT: logs the write statements executed on the master. High performance, but functions such as now() and statements that read system parameters can leave the master and slave out of sync.

  • binlog_format=ROW (the default): logs the row data produced by each write on the master. Slower for bulk operations, but it avoids the master/slave inconsistencies caused by now(), user(), @@hostname and the like.

  • binlog_format=MIXED: a mix of the two levels above; statements that involve such functions are logged as ROW and the rest as STATEMENT, but it cannot recognize system variables.

Optional settings: binlog-ignore-db and binlog-do-db (exclude or include specific databases in the binlog).

Integrating the middleware

The application server only needs to connect to the middleware; the middleware takes care of the actual database connections.
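
Business code itself stays ordinary JDBC/ORM code; which database (or middleware) it talks to is purely a matter of which DataSource is configured. A minimal sketch, assuming the DataSource is wired up elsewhere (the class and method names are illustrative):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// Illustrative only: ordinary JDBC code, unaware of read-write splitting or sharding.
public class MiddlewareClientSketch {

    // Count the rows of a table through whatever DataSource is configured
    // (direct database connection or middleware).
    public static long countRows(DataSource dataSource, String table) throws Exception {
        try (Connection conn = dataSource.getConnection();
             PreparedStatement stmt = conn.prepareStatement("SELECT COUNT(*) FROM " + table);
             ResultSet rs = stmt.executeQuery()) {
            return rs.next() ? rs.getLong(1) : 0L;
        }
    }
}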

ShardingSphere-JDBC:

Configuration files

Dependencies

<!-- ShardingSphere dependencies -->
        <dependency>
            <groupId>org.apache.shardingsphere</groupId>
            <artifactId>sharding-jdbc-spring-boot-starter</artifactId>
            <version>4.1.1</version>
        </dependency>

        <dependency>
            <groupId>com.alibaba</groupId>
            <artifactId>druid-spring-boot-starter</artifactId>
            <version>1.2.14</version>
        </dependency>

        <dependency>
            <groupId>mysql</groupId>
            <artifactId>mysql-connector-java</artifactId>
        </dependency>

        <dependency>
            <groupId>com.baomidou</groupId>
            <artifactId>mybatis-plus-boot-starter</artifactId>
            <version>3.5.0</version>
        </dependency>

        <dependency>
            <groupId>com.ishowcs</groupId>
            <artifactId>ishow-common-datasource</artifactId>
            <version>0.0.1-SNAPSHOT</version>
        </dependency>

YAML configuration

sharding-sphere:
    datasource:
      names:
        master,slave1
      # Master data source
      master:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://${ishowcs.server-mysql}:3308/${ishowcs.database}?useUnicode=true&serverTimezone=Asia/Shanghai&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=false
        username: root
        password: ishow123321@
        initial-size: 60
        min-idle: 60
        max-active: 180
        max-wait: 60000
        time-between-eviction-runs-millis: 60000
        min-evictable-idle-time-millis: 300000
        max-evictable-idle-time-millis: 300000
        validation-query: SELECT 1 FROM DUAL
        keep-alive: true
        test-while-idle: true
        test-on-borrow: true
        test-on-return: false
        filters: stat,wall,log4j2
        connect-timeout: 60000
        socket-timeout: 60000
        connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
      # Slave data source
      slave1:
        type: com.alibaba.druid.pool.DruidDataSource
        driver-class-name: com.mysql.cj.jdbc.Driver
        url: jdbc:mysql://${ishowcs.server-mysql-slave}:3307/${ishowcs.database}?useUnicode=true&serverTimezone=Asia/Shanghai&characterEncoding=utf8&zeroDateTimeBehavior=convertToNull&useSSL=false
        username: root
        password: ishow123321@
        initial-size: 50
        min-idle: 50
        max-active: 100
        max-wait: 60000
        time-between-eviction-runs-millis: 60000
        min-evictable-idle-time-millis: 300000
        max-evictable-idle-time-millis: 300000
        validation-query: SELECT 1 FROM DUAL
        keep-alive: true
        test-while-idle: true
        test-on-borrow: true
        test-on-return: false
        filters: stat,wall,log4j2
        connect-timeout: 60000
        socket-timeout: 60000
        connectionProperties: druid.stat.mergeSql=true;druid.stat.slowSqlMillis=5000
    rules:
     readwrite-splitting:
      data-sources:
        mds:
          load-balancer-name: read_random # custom name of the load balancer used by this read/write-splitting rule
          type: Static
          props:
            write-data-source-name: master
            read-data-source-names: slave1
      load-balancers:
        read_random:
          type: ROUND_ROBIN  # round-robin load balancing
    props:
      # Show SQL in the logs (default: false)
      sql:
        show: true

Custom data source

package com.ishowcs.test;

import com.baomidou.dynamic.datasource.DynamicRoutingDataSource;
import com.baomidou.dynamic.datasource.spring.boot.autoconfigure.DynamicDataSourceProperties;
import lombok.NonNull;
import lombok.RequiredArgsConstructor;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.Lazy;
import org.springframework.context.annotation.Primary;

import javax.sql.DataSource;

/**
 * @Author:xieHuaiCheng
 * @Package:com.ishowcs.test
 * @Project:cloud-ishow-api
 * @name:ShardingDataSourceConfig
 * @Date:2024/7/30 15:18
 * @Filename:ShardingDataSourceConfig
 */
@Configuration
@ConfigurationProperties(prefix = "spring.datasource.dynamic.datasource")
// With Lombok constructor injection, @RequiredArgsConstructor only generates constructor
// parameters for final fields and for fields annotated with Lombok's @NonNull;
// javax/Jetbrains @NotNull annotations are ignored by Lombok.
@RequiredArgsConstructor(onConstructor_ = @Autowired)
public class MyDataSourceConfiguration  {
    @NonNull // Lombok non-null annotation, so the field is injected via the generated constructor
    private DynamicDataSourceProperties properties;
    // DataSource built by ShardingSphere. Injected lazily through field injection,
    // because a field annotated only with javax @NotNull is never included in the
    // Lombok-generated constructor and would stay null.
    @Lazy
    @Autowired
    private DataSource shardingDataSource;
    @Primary
    @Bean("shardingDataSource")
    public DataSource dataSource() {
        DynamicRoutingDataSource dataSource = new DynamicRoutingDataSource();
        dataSource.setPrimary(properties.getPrimary());
        dataSource.setStrict(properties.getStrict());
        dataSource.setStrategy(properties.getStrategy());
        dataSource.setP6spy(properties.getP6spy());
        dataSource.setSeata(properties.getSeata());
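        // Register the ShardingSphere DataSource under the name "sharding" so that
        // business code can switch to it via dynamic-datasource's @DS("sharding").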
        dataSource.addDataSource("sharding", shardingDataSource);
        return dataSource;
    }
}
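
One possible way to use the data source registered above from business code is dynamic-datasource's @DS annotation; the service and method names below are illustrative, not from the project:

import com.baomidou.dynamic.datasource.annotation.DS;
import org.springframework.stereotype.Service;

// Illustrative only: every statement issued inside this method runs against the
// "sharding" entry, i.e. the ShardingSphere DataSource configured above.
@Service
public class OrderQueryService {

    @DS("sharding")
    public void loadOrders() {
        // MyBatis-Plus mapper calls would go here; SELECTs go to slave1 and
        // writes go to master, according to the YAML read/write-splitting rules.
    }
}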

Code configuration issues encountered

Issue 1

When read-write splitting was configured, reads and writes were sometimes routed to the wrong data source during business operations.

ShardingSphere-JDBC is positioned as an enhanced JDBC driver.

ShardingSphere-Proxy: