...

Code Block
GSH:
// generate the DDL for the grouper_sync status tables; per the note below, the
// generated script is reviewed and run manually, and the "false" argument
// appears to control whether the DDL is executed directly
args = new String[1];
args[0] = "false";
edu.internet2.middleware.grouper.app.tableSync.TableSyncCreateTables.main(args);

Performance

The performance is limited by how many changes there are and how quickly the data can be retrieved on both sides.

From one local MySQL database to another, 25,000 rows with 25,000 inserts take about 2 minutes.  If there are 25,000 rows and no updates, it takes about 0.3 seconds.

Data is selected in batches, and inserts/updates/deletes are executed in batches.

Configuration

All settings defaults (grouper.client.properties)

Code Block
################################
## Sync database table settings
################################

# the grouping column is what is uniquely selected, and then batched through to get data, default across all sql jobs
# {valueType: "integer"}
# grouperClient.syncTableDefault.groupingSize = 10000

# size of jdbc batches, default across all sql jobs
# {valueType: "integer"}
# grouperClient.syncTableDefault.batchSize = 800

# number of bind vars in select, default across all sql jobs
# {valueType: "integer"}
# grouperClient.syncTableDefault.maxBindVarsInSelect = 900

# default database for all sql jobs for status tables (e.g. grouper_sync*)
# {valueType: "string"}
# grouperClient.syncTableDefault.statusDatabase = grouper

# default switch from incremental to full if the number of incrementals is over this threshold
# {valueType: "integer"}
# grouperClient.syncTableDefault.switchFromIncrementalToFullIfOverRecords = 300000

# switch from incremental to full if the number of incrementals is over the threshold, this is the full sync subtype to switch to
# fullSyncChangeFlag, fullSyncFull, fullSyncGroups
# {valueType: "string"}
# grouperClient.syncTableDefault.switchFromIncrementalToFullSubtype = fullSyncFull

# switch from incremental to group (if there's a grouping col) if the number of incrementals for a certain group is over this threshold
# {valueType: "integer"}
# grouperClient.syncTableDefault.switchFromIncrementalToGroupIfOverRecordsInGroup = 50000

# switch from incremental to full if the number of groups (and records over threshold) is over this threshold
# i.e. needs to be over 100 groups and over 300000 records
# {valueType: "integer"}
# grouperClient.syncTableDefault.switchFromIncrementalToFullIfOverGroupCount = 100
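
To make the grouping settings concrete, here is a minimal sketch, assuming a sorted list of distinct grouping values, of how a groupingSize could page through them as ranged selects (illustrative only, not Grouper's internal code):

Code Block
import java.util.List;

// Illustrative sketch (not Grouper code) of paging through sorted distinct
// grouping values: each page of groupingSize values becomes one ranged select
// against the data table.
public class GroupingPager {

    public static void page(List<String> sortedGroupings, int groupingSize) {
        for (int start = 0; start < sortedGroupings.size(); start += groupingSize) {
            int end = Math.min(start + groupingSize, sortedGroupings.size()) - 1;
            String low = sortedGroupings.get(start);
            String high = sortedGroupings.get(end);
            // one page: select * from table where grouping_col >= ? and grouping_col <= ?
            System.out.println("page range: " + low + " .. " + high);
        }
    }

    public static void main(String[] args) {
        // five groupings with a page size of two: a..b, c..d, e..e
        page(List.of("a", "b", "c", "d", "e"), 2);
    }
}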


...

Example: the demo server syncDDL is created with the above.  If you have a GSH container, you probably want to volume-bind /opt/grouper/grouper.apiBinary/ddlScripts to a location outside the container.  Then, after you confirm the DDL is good, you can run the generated SQL with gsh -registry -runsqlfile ddlScripts/sqlfile_generated.sql to create the status tables.

Overall flow of syncs

  1. See if the job needs to run or exit (a sketch of the startup coordination follows this table)

    Sync type / Check when / How check:

    full / at startup: if a full sync has run in the last X (configurable, e.g. 12 hours), then don't run

    full / at startup:
    1. register that this job wants to run
    2. see if another job is running
    3. if so, wait until it isn't running
    4. register as running if nothing else is running
    5. wait a couple more seconds
    6. if nothing else is running, then run

    incremental / at startup: if another job is running or pending, then wait

    incremental / throughout job: if another job is running or pending, then exit
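
    As a rough illustration of the startup coordination above, here is a hedged sketch; JobCoordinator and its methods are hypothetical stand-ins for the grouper_sync status tables, not a real Grouper API:

Code Block
import java.util.concurrent.TimeUnit;

// Illustrative sketch of the full-sync startup check described above.
// JobCoordinator is a hypothetical stand-in for the grouper_sync status
// tables, not a real Grouper API.
public class FullSyncStartup {

    public static void runIfAllowed(JobCoordinator coordinator) throws InterruptedException {
        // 1. register that this job wants to run
        coordinator.registerPending();

        // 2-3. if another job is running, wait until it is not
        while (coordinator.isAnotherJobRunning()) {
            TimeUnit.SECONDS.sleep(5);
        }

        // 4. register as running, since nothing else is running
        coordinator.registerRunning();

        // 5. wait a couple more seconds in case another job registered at the same time
        TimeUnit.SECONDS.sleep(2);

        // 6. if nothing else is running, then run
        if (!coordinator.isAnotherJobRunning()) {
            coordinator.runSync();
        }
    }

    /** hypothetical coordination interface backed by the status tables */
    public interface JobCoordinator {
        void registerPending();
        boolean isAnotherJobRunning();
        void registerRunning();
        void runSync();
    }
}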


  2. Select all of something.  Note: if there is a source and a destination query, run one in a separate thread and the other in the current thread (see the sketch after this table).  Handle exceptions appropriately, and wait for both to finish before proceeding.

    Sync type / Sync subtype / Select what / From where / Example / More info:

    full / fullSyncFull: select all records and all columns, from the source and the destination.
      Example: select * from table
      More info: get all records from both sides

    full / fullSyncGroupings: select all distinct groupings, from the source and the destination.
      Example: select distinct grouping_col from table
      More info: either 1. select all one-col primary keys, or 2. select one col that groups sets of records together (e.g. group_name of memberships)

    full / fullSyncChangeFlag: select all primary keys and a column that is a change flag, from the source and the destination.
      Example: select uuid, last_updated from table
      More info: the change flag can be a last updated date (to the milli) or a checksum string or something similar

    full / fullSyncMetadata: select all distinct groupings, from the source and the destination.
      Example: select distinct grouping_col from table
      More info: 1. if a grouping is not in the destination, then add it; 2. if a grouping is in the destination and not the source, then delete it.  Useful if groups get renamed or tags get added/removed.  Does not sync all memberships and does not count as a full sync.

    incremental / incrementalAllColumns: get all incrementals that have happened since the last check, including all columns, from the source or a change log table.
      Example: select * from table where last_updated > last_checked
      More info: use if the source table has a last_updated or numerically increasing col.  Note, this will not process deletes if run off the source table, since deleted rows won't be there.

    incremental / incrementalPrimaryKey: get all incrementals, where each row has the primary key to sync, from a change log table.
      Example: select primary_key_col0, primary_key_col1 from change_log_table where last_updated > last_checked
      More info: use if the change_log_table doesn't have all columns, but might also have deletes.
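
    A sketch of the two-sided select described in this step: the source query runs in a helper thread while the destination query runs in the current thread, exceptions are handed back, and both must finish before the compare starts (connections, SQL, and names are placeholders, not Grouper code):

Code Block
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.atomic.AtomicReference;

// Illustrative sketch of step 2: select from the source in a helper thread
// while the current thread selects from the destination, then wait for both.
public class ParallelSelect {

    public static List<Object[]> selectAll(Connection conn, String sql) throws SQLException {
        List<Object[]> rows = new ArrayList<>();
        try (Statement stmt = conn.createStatement(); ResultSet rs = stmt.executeQuery(sql)) {
            int cols = rs.getMetaData().getColumnCount();
            while (rs.next()) {
                Object[] row = new Object[cols];
                for (int i = 0; i < cols; i++) {
                    row[i] = rs.getObject(i + 1);
                }
                rows.add(row);
            }
        }
        return rows;
    }

    public static void selectBothSides(Connection sourceConn, Connection destConn) throws Exception {
        AtomicReference<List<Object[]>> sourceRows = new AtomicReference<>();
        AtomicReference<Exception> sourceProblem = new AtomicReference<>();

        // source query in a helper thread
        Thread sourceThread = new Thread(() -> {
            try {
                sourceRows.set(selectAll(sourceConn, "select * from table_from"));
            } catch (Exception e) {
                sourceProblem.set(e); // hand the exception back to the main thread
            }
        });
        sourceThread.start();

        // destination query in the current thread
        List<Object[]> destRows = selectAll(destConn, "select * from table_to");

        // wait for both to finish before proceeding to the compare
        sourceThread.join();
        if (sourceProblem.get() != null) {
            throw sourceProblem.get();
        }
        // ... continue with sourceRows.get() and destRows
    }
}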



  3. Initial compare (a compare sketch follows this table)

    Sync type / Sync subtype / Initial compare:

    full / fullSyncFull:
      If a primary key exists in the destination and not the source, then delete that row by primary key in the destination.
      If all columns of a row in the source match all columns of the row in the destination, then remove the row from both lists.
      Compare all records and batch up the inserts/updates/deletes; done.

    full / fullSyncGroupings:
      If a grouping exists in the destination and not the source, then delete that grouping in the destination.

    full / fullSyncChangeFlag:
      If a primary key exists in the destination and not the source, then delete that row by primary key in the destination.
      If the change flag of a row in the source matches the change flag of the row in the destination, then remove the row from both lists.

    incremental / *: N/A
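
    A minimal sketch of the fullSyncFull compare, with each row simplified to a primary key string mapped to a string of its remaining columns (illustrative only, not Grouper's internal code):

Code Block
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.Objects;

// Illustrative sketch of the initial compare: rows keyed by primary key are
// sorted into insert/update/delete lists; equal rows need no work.
public class InitialCompare {

    public static void compare(Map<String, String> source, Map<String, String> destination,
            List<String> inserts, List<String> updates, List<String> deletes) {

        // primary key in the destination and not the source: delete from the destination
        for (String key : destination.keySet()) {
            if (!source.containsKey(key)) {
                deletes.add(key);
            }
        }

        for (Map.Entry<String, String> entry : source.entrySet()) {
            String key = entry.getKey();
            if (!destination.containsKey(key)) {
                inserts.add(key);           // in the source only: insert
            } else if (!Objects.equals(entry.getValue(), destination.get(key))) {
                updates.add(key);           // in both, but columns differ: update
            }
            // in both with all columns equal: nothing to do
        }
    }

    public static void main(String[] args) {
        Map<String, String> source = Map.of("123", "jsmith|active", "456", "kjones|active");
        Map<String, String> dest = Map.of("456", "kjones|inactive", "789", "stale|row");
        List<String> ins = new ArrayList<>(), upd = new ArrayList<>(), del = new ArrayList<>();
        compare(source, dest, ins, upd, del);
        System.out.println("insert " + ins + ", update " + upd + ", delete " + del);
        // insert [123], update [456], delete [789]
    }
}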



  4. Switch job type?

    Note: if an incremental sync switches to a grouping or full sync, it won't yield to a real full sync...

    Sync type / Sync subtype / If this occurs / Switch to:

    incremental / *:
      If this occurs: the number of records is greater than X (configurable, e.g. 10k), and, if grouping, there are fewer than Y groupings (configurable, e.g. 5)
      Switch to:
      1. Capture the current timestamp or max record in the change_log (call it "a")
      2. Full sync everything
      3. Update last processed to "a", and skip records until "a"

    incremental / *:
      If this occurs: if grouping, and there's a fullSyncGroupings job, and a grouping has more than X records (configurable, e.g. 5k)
      Switch to:
      1. Capture the current timestamp or max record in the change_log (call it "a")
      2. Do a grouping sync on those groupings (e.g. all records for a group)
      3. Skip records in that grouping until "a"

    (a simplified sketch of these thresholds follows this table)
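
    A simplified sketch of these thresholds, using the default values from grouper.client.properties shown earlier (the real decision logic in Grouper is more involved):

Code Block
// Illustrative, simplified sketch of the switching thresholds; the constants
// mirror the grouper.client.properties defaults shown earlier.
public class SwitchDecision {

    static final int OVER_RECORDS = 300000;  // switchFromIncrementalToFullIfOverRecords
    static final int OVER_GROUP_COUNT = 100; // switchFromIncrementalToFullIfOverGroupCount

    // groupingsOverThreshold: how many groupings have more incremental records
    // than switchFromIncrementalToGroupIfOverRecordsInGroup (e.g. 50000)
    public static String decide(int incrementalRecords, int groupingsOverThreshold) {
        // too many incremental records overall: switch to the configured full subtype
        if (incrementalRecords > OVER_RECORDS) {
            return "fullSyncFull"; // see switchFromIncrementalToFullSubtype
        }
        // too many oversized groupings: one full sync beats many grouping syncs
        if (groupingsOverThreshold > OVER_GROUP_COUNT) {
            return "fullSyncFull";
        }
        // a few oversized groupings: sync just those groupings
        if (groupingsOverThreshold > 0) {
            return "fullSyncGroupings";
        }
        return "incremental"; // stay incremental
    }
}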



  5. Batch up requests (e.g. process a certain number of records at once).  Note: this can be done in several threads (a SQL-building sketch follows this table)

    Sync type / Sync subtype / Select what / From where / Number of records / Example:

    full / fullSyncFull: N/A, this job is already done in the initial compare

    full / fullSyncGroupings: select all columns, from the source and the destination, that are between two grouping indexes.
      Number of records: approx 10k or 100k; might be unknown if the grouping col is not the primary key.  Based on the grouping size configuration, e.g. for groups it might be 5k groups at once, for people it might be 50k people at once.
      Example: select * from table where grouping_col > ? and grouping_col < ?

    full / fullSyncChangeFlag: select all primary keys and a column that is a change flag, from the source and the destination.
      Number of records: 50-900, constrained by the bind var max of 1000.
      Example: select * from table where primary_key_col0 = ? and primary_key_col1 = ?

    incremental / incrementalAllColumns: get all incrementals that have happened since the last check, including all columns, from the destination.
      Number of records: 50-900, constrained by the bind var max of 1000.
      Example: select * from table where primary_key_col0 = ? and primary_key_col1 = ?

    incremental / incrementalPrimaryKey: get all incrementals, where each row has the primary key to sync, from the source and the destination.
      Number of records: 50-900, constrained by the bind var max of 1000.
      Example: select * from table where primary_key_col0 = ? and primary_key_col1 = ?
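
    To illustrate how a batch of 50-900 primary keys becomes one select under the bind var cap, here is a sketch that builds the OR'd tuple predicate (SQL and identifiers are placeholders, not Grouper's generated SQL):

Code Block
// Illustrative sketch of step 5's batched select for a two-column primary key:
// one "(pk0 = ? and pk1 = ?)" clause per row, OR'd together, sized so the
// total bind variables stay under the configured max.
public class BatchedSelectSql {

    public static String buildSelect(String table, int rowsInThisBatch) {
        StringBuilder sql = new StringBuilder("select * from ").append(table).append(" where ");
        for (int i = 0; i < rowsInThisBatch; i++) {
            if (i > 0) {
                sql.append(" or ");
            }
            sql.append("(primary_key_col0 = ? and primary_key_col1 = ?)");
        }
        return sql.toString();
    }

    public static void main(String[] args) {
        int maxBindVarsInSelect = 900;  // configured cap
        int bindVarsPerRow = 2;         // two primary key columns
        int rowsPerSelect = maxBindVarsInSelect / bindVarsPerRow; // 450 rows per select

        // print a small example with 3 rows (would be up to rowsPerSelect in practice)
        System.out.println(buildSelect("table_to", Math.min(rowsPerSelect, 3)));
    }
}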



  6. Process records

    Note: these use prepared statement batching, so deletes happen in batches of 100 (configurable), updates happen in batches of 100 (configurable), and inserts happen in batches of 100 (configurable); see the sketch after this list.

    1. If record exists in source and not in destination by primary key, then insert the destination record
    2. If record exists in destination and not source by primary key, then delete the destination record
    3. If a primary key exists in source and destination, but all cols do not match, then update the destination record
    4. If a primary key exists in source and destination, and all cols match, then ignore the record
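
    A rough sketch of this batching with JDBC prepared statements (table and column names are placeholders, not Grouper's generated SQL):

Code Block
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.util.List;

// Illustrative sketch of step 6's prepared statement batching: inserts are
// added to a JDBC batch and flushed every batchSize rows (e.g. 100).
// Updates and deletes would be batched the same way.
public class BatchedInserts {

    public static void insertAll(Connection conn, List<Object[]> rows, int batchSize)
            throws SQLException {
        String sql = "insert into table_to (primary_key_col0, primary_key_col1, some_col) values (?, ?, ?)";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            int pending = 0;
            for (Object[] row : rows) {
                stmt.setObject(1, row[0]);
                stmt.setObject(2, row[1]);
                stmt.setObject(3, row[2]);
                stmt.addBatch();
                if (++pending == batchSize) { // flush every batchSize rows
                    stmt.executeBatch();
                    pending = 0;
                }
            }
            if (pending > 0) {                // flush the final partial batch
                stmt.executeBatch();
            }
        }
    }
}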

  7. Move the pointer forward to the max number/date processed (a sketch follows)
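
    A hedged sketch of moving the pointer forward, assuming a hypothetical status table with a last_processed column (the real grouper_sync schema differs):

Code Block
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Timestamp;

// Illustrative sketch of step 7: persist the max last_updated value that was
// processed, so the next incremental run starts from there.  The status table
// here is a placeholder, not the real grouper_sync schema.
public class ProgressPointer {

    public static void moveForward(Connection statusConn, String syncKey, Timestamp maxProcessed)
            throws SQLException {
        String sql = "update sync_status set last_processed = ? where sync_key = ?";
        try (PreparedStatement stmt = statusConn.prepareStatement(sql)) {
            stmt.setTimestamp(1, maxProcessed);
            stmt.setString(2, syncKey);
            stmt.executeUpdate();
        }
    }
}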


All settings specific to a provisioner; in this case "personSource" is the variable name for the provisioner (grouper.client.properties)

Code Block
# grouper client or loader database key where copying data from
# the 3rd part is the sync_id.  in this case "personSource".  Defaults to "grouper"
# {valueType: "string"}
# grouperClient.syncTable.personSource.databaseFrom =

# table or view where copying data from, include the schema if needed
# {valueType: "string"}
# grouperClient.syncTable.personSource.tableFrom =

# grouper client or loader database key where copying data to
# {valueType: "string"}
# grouperClient.syncTable.personSource.databaseTo =

# grouper client or loader database key (readonly) if large queries should be performed against a different database
# {valueType: "string"}
# grouperClient.syncTable.personSource.databaseToReadonly =

# table or view where copying data to, include the schema if needed
# {valueType: "string"}
# grouperClient.syncTable.personSource.tableTo = PERSON_SOURCE_TEMP

# columns must match in from and to tables, you can specify columns or do all with an asterisk
# {valueType: "string"}
# grouperClient.syncTable.personSource.columns = *

# if there is a primary key, list it, else list the composite keys.  note, this doesn't
# have to literally be the database primary key, it just needs to be a unique col(s) in the table
# {valueType: "string"}
# grouperClient.syncTable.personSource.primaryKeyColumns = penn_id

# if doing fullSyncChangeFlag (look for a col that says if the rows are equal, e.g. a timestamp or a checksum)
# {valueType: "string"}
# grouperClient.syncTable.personSource.changeFlagColumn = check_sum

# the grouping column is what is uniquely selected, and then batched through to get data.  Optional.
# for groups this should be the group uuid
# {valueType: "string"}
# grouperClient.syncTable.personSource.groupingColumn = penn_id

# the grouping column is what is uniquely selected, and then batched through to get data, defaults to global setting
# {valueType: "integer"}
# grouperClient.syncTable.personSource.groupingSize = 10000

# size of jdbc batches
# {valueType: "integer"}
# grouperClient.syncTable.personSource.batchSize = 800

# number of bind vars in select
# {valueType: "integer"}
# grouperClient.syncTable.personSource.maxBindVarsInSelect = 900

# switch from incremental to full if the number of incrementals is over this threshold
# if this is less than 0, then it will not switch from incremental to full
# {valueType: "integer"}
# grouperClient.syncTable.personSource.switchFromIncrementalToFullIfOverRecords = 300000

# switch from incremental to full if the number of incrementals is over the threshold, this is the full sync subtype to switch to
# fullSyncChangeFlag, fullSyncFull, fullSyncGroups
# {valueType: "string"}
# grouperClient.syncTable.personSource.switchFromIncrementalToFullSubtype = fullSyncFull

# switch from incremental to group (if there's a grouping col) if the number of incrementals for a certain group is over this threshold
# if this is less than 0, then it will not switch from incremental to group
# {valueType: "integer"}
# grouperClient.syncTable.personSource.switchFromIncrementalToGroupIfOverRecordsInGroup = 50000

# switch from incremental to full if the number of groups (and records over threshold) is over this threshold
# i.e. needs to be over 100 groups and over 300000 records
# {valueType: "integer"}
# grouperClient.syncTable.personSource.switchFromIncrementalToFullIfOverGroupCount = 100

# if querying a real time table, this is the table, needs to have primary key columns.
# each record will check the source and destination and see what to do
# {valueType: "string"}
# grouperClient.syncTable.personSource.incrementalPrimaryKeyTable = real_time_table

# name of a column that has a sequence or last updated date.
# must be in the incrementalPrimaryKeyTable if incremental primary key sync
# {valueType: "string"}
# grouperClient.syncTable.personSource.incrementalProgressColumn = last_updated

# name of a column that has a sequence or last updated date.
# must be in the main data table if incremental all columns
# {valueType: "string"}
# grouperClient.syncTable.personSource.incrementalAllColumnsColumn = last_updated

# database where status table is.  defaults to "grouper"
# {valueType: "string"}
# grouperClient.syncTable.personSource.statusDatabase = grouper


Example: sync subject source

...

Code Block
# grouper client database key where copying data from
# {valueType: "string"}
grouperClient.syncTable.personSource.databaseFrom = pcom

# table or view where copying data from
# {valueType: "string"}
grouperClient.syncTable.personSource.tableFrom = PERSON_SOURCE

# grouper client database key where copying data to
# {valueType: "string"}
grouperClient.syncTable.personSource.databaseTo = awsDev

# table or view where copying data to
# {valueType: "string"}
grouperClient.syncTable.personSource.tableTo = PERSON_SOURCE_TEMP

# columns must match in from and to tables, you can specify columns or do all with an asterisk
# {valueType: "string"}
grouperClient.syncTable.personSource.columns = *

# if there is a primary key, list it, else list the composite keys
# {valueType: "string"}
grouperClient.syncTable.personSource.primaryKeyColumns = penn_id

# the grouping column is what is uniquely selected, and then batched through to get data.
# {valueType: "string"}
grouperClient.syncTable.personSource.groupingColumn = penn_id

# the grouping column is what is uniquely selected, and then batched through to get data, defaults to 10000
# {valueType: "integer"}
grouperClient.syncTable.personSource.groupingSize = 10000

# size of jdbc batches
# {valueType: "integer"}
grouperClient.syncTable.personSource.batchSize = 50


...

grouperClient.syncTable.personSource.incrementalPrimaryKeyTable = real_time_table

# name of a column that has a sequence or last updated date.  
# must be in the incrementalPrimaryKeyTable if incremental primary key sync
# {valueType: "string"}
# grouperClient.syncTable.personSource.incrementalProgressColumn = last_updated

# name of a column that has a sequence or last updated date.  
# must be in the main data table if incremental all columns
# {valueType: "string"}
# grouperClient.syncTable.personSource.incrementalAllColumnsColumn = last_updated

# database where status table is.  defaults to "grouper"
# {valueType: "string"}
# grouperClient.syncTable.personSource.statusDatabase = grouper



grouper-loader.properties to schedule jobs

Code Block
################################
## Table sync jobs
## tableSync jobs should use class: edu.internet2.middleware.grouper.app.tableSync.TableSyncOtherJob
## and include a setting to point to the grouperClient config, if not same: otherJob.<otherJobName>.grouperClientTableSyncConfigKey = key
## this is the subtype of job to run: otherJob.<otherJobName>.syncType = fullSyncFull    
## (can be: fullSyncFull, fullSyncGroups, fullSyncChangeFlag, incrementalAllColumns, incrementalPrimaryKey)
################################

# Object Type Job class
# {valueType: "class", mustExtendClass: "edu.internet2.middleware.grouper.app.loader.OtherJobBase", mustImplementInterface: "org.quartz.Job"}
# otherJob.membershipSync.class = edu.internet2.middleware.grouper.app.tableSync.TableSyncOtherJob

# Object Type Job cron
# {valueType: "string"}
# otherJob.membershipSync.quartzCron = 0 0/30 * * * ?

# this is the key in the grouper.client.properties that represents this job
# {valueType: "string"}
# otherJob.membershipSync.grouperClientTableSyncConfigKey = memberships

# fullSyncFull, fullSyncGroups, fullSyncChangeFlag, incrementalAllColumns, incrementalPrimaryKey
# {valueType: "string"}
# otherJob.membershipSync.syncType = fullSyncFull



Logging

Make sure the log4j jar is on the classpath, and configure log4j.properties

Code Block
log4j.logger.edu.internet2.middleware.grouperClient.jdbc.tableSync.GcTableSyncLog =  DEBUG, grouper_stdout
log4j.additivity.edu.internet2.middleware.grouperClient.jdbc.tableSync.GcTableSyncLog = false

You will see entries like the following.  Note: there is an entry every minute for in-progress jobs (finalLog: false)

Code Block
2019-05-06 09:13:57,246: [Thread-20] DEBUG GrouperClientLog.debug(92) -  - fullSync: true, key: personSourceTest, finalLog: false, state: inserts, databaseFrom: grouper, tableFrom: testgrouper_sync_subject_from, databaseTo: grouper, tableTo: testgrouper_sync_subject_to, totalCountFrom: 25000, fromGroupingUniqueValues: 25000, groupingsToDelete: 0, numberOfBatches: 3, currentBatch: 2, rowsSelectedFrom: 25000, rowsSelectedTo: 0, sqlBatchExecute: 13 of 25
2019-05-06 09:14:45,341: [main] DEBUG GrouperClientLog.debug(92) -  - fullSync: true, key: personSourceTest, finalLog: true, state: deletes, databaseFrom: grouper, tableFrom: testgrouper_sync_subject_from, databaseTo: grouper, tableTo: testgrouper_sync_subject_to, totalCountFrom: 25000, fromGroupingUniqueValues: 25000, groupingsToDelete: 0, numberOfBatches: 3, currentBatch: 2, rowsSelectedFrom: 25000, rowsSelectedTo: 0, sqlBatchExecute: 24 of 25, rowsNeedInsert: 25000, took: 0:01:48.181
2019-05-06 09:14:45,692: [main] DEBUG GrouperClientLog.debug(92) -  - fullSync: true, key: personSourceTest, finalLog: true, state: deletes, databaseFrom: grouper, tableFrom: testgrouper_sync_subject_from, databaseTo: grouper, tableTo: testgrouper_sync_subject_to, totalCountFrom: 25000, fromGroupingUniqueValues: 25000, toGroupingUniqueValues: 25000, groupingsToDelete: 0, numberOfBatches: 3, currentBatch: 2, rowsSelectedFrom: 25000, rowsSelectedTo: 25000, rowsWithEqualData: 25000, took: 0:00:00.330
2019-05-06 09:14:46,032: [main] DEBUG GrouperClientLog.debug(92) -  - fullSync: true, key: personSourceTest, finalLog: true, state: deletes, databaseFrom: grouper, tableFrom: testgrouper_sync_subject_from, databaseTo: grouper, tableTo: testgrouper_sync_subject_to, totalCountFrom: 25000, fromGroupingUniqueValues: 25000, toGroupingUniqueValues: 25000, groupingsToDelete: 1, sqlBatchExecute: 0 of 1, numberOfBatches: 3, currentBatch: 2, rowsSelectedFrom: 25000, rowsSelectedTo: 24999, rowsWithEqualData: 24998, rowsNeedInsert: 1, rowsNeedUpdate: 1, took: 0:00:00.318

...