Description

This job will SFTP a file from an endpoint, parse the contents as a delimited file, and sync the rows to a SQL table. You can then use the SQL table to sync to a group or another table, use it as a subject source, use it in an attribute resolver, etc.

Configure in UI

External system config for SFTP

Sample file

[mchyzer@flash ~]$ ls /home/mchyzer/someFile.txt
/home/mchyzer/someFile.txt
[mchyzer@flash ~]$ more someFile.txt 
NYUID|Application|NET_ID
N11234127|BRC|abc134
N11234127|GiveAVioletAward|abc134
N11234127|iLearn_Blatant|abc134
N11234127|Workday|abc134
N11234497|BRC|def245
N11234497|GiveAVioletAward|def245
N11234497|iCims|def245
N11234497|iLearn_Blatant|def245
[mchyzer@flash ~]$ 
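The job reads the file according to the configured separator and header row (the sftpToSql.separator and sftpToSql.hasHeaderRow settings below). A minimal sketch of that parsing in Python, using the first few rows of the sample file above (this is just an illustration of the file format, not the job's actual implementation):

```python
import csv
import io

# First few lines of the sample file above: pipe-separated, with a header row
sample = """NYUID|Application|NET_ID
N11234127|BRC|abc134
N11234127|GiveAVioletAward|abc134
"""

# csv.DictReader maps header names to values, mirroring the job's
# separator and hasHeaderRow settings
reader = csv.DictReader(io.StringIO(sample), delimiter="|")
rows = list(reader)
for row in rows:
    print(row["NYUID"], row["Application"], row["NET_ID"])
```

Each data row becomes one row in the target SQL table, with the columns mapped by position to the sftpToSql.columns setting.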


Daemon output

SQL table for this example

DDL (MySQL in this case, but it could be any database)

CREATE TABLE my_sftp_sync_table (
  nyuid varchar(100),
  application varchar(100),
  net_id varchar(100)
);
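The sync compares the file against the table by the configured primary key columns (sftpToSql.columnsPrimaryKey): new rows are inserted, changed rows updated, and rows no longer in the file deleted. A rough sketch of that semantics against an in-memory SQLite table, assuming (nyuid, application) as the primary key columns for this example (this is illustrative only; the real job runs against the configured database external system):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Same shape as the DDL above, with an assumed composite primary key
conn.execute("""CREATE TABLE my_sftp_sync_table (
  nyuid varchar(100),
  application varchar(100),
  net_id varchar(100),
  PRIMARY KEY (nyuid, application)
)""")

# Rows parsed from the file, keyed by the primary key columns
file_rows = {
    ("N11234127", "BRC"): ("abc134",),
    ("N11234127", "Workday"): ("abc134",),
}

# Snapshot of what is currently in the table
existing = {
    (r[0], r[1]): (r[2],)
    for r in conn.execute("SELECT nyuid, application, net_id FROM my_sftp_sync_table")
}

# Insert new rows and update changed rows
for pk, rest in file_rows.items():
    if pk not in existing:
        conn.execute("INSERT INTO my_sftp_sync_table VALUES (?, ?, ?)", pk + rest)
    elif existing[pk] != rest:
        conn.execute(
            "UPDATE my_sftp_sync_table SET net_id = ? WHERE nyuid = ? AND application = ?",
            rest + pk)

# Delete rows no longer present in the file
for pk in existing:
    if pk not in file_rows:
        conn.execute(
            "DELETE FROM my_sftp_sync_table WHERE nyuid = ? AND application = ?", pk)

# The table now holds exactly the rows from the file
print(conn.execute("SELECT COUNT(*) FROM my_sftp_sync_table").fetchone()[0])
```

Because deletes happen for rows missing from the file, the table always ends up as an exact mirror of the most recent file.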


Configuration in config file (not needed if using wizard above)

#####################################################
## sftp delimited file and sync to SQL table
## "sftpToSqlJobId" is the key of the config, change that for your csv file job
#####################################################

# set this to enable the job
# {valueType: "class", readOnly: true, mustExtendClass: "edu.internet2.middleware.grouper.app.loader.OtherJobBase"}
# otherJob.sftpToSqlJobId.class = edu.internet2.middleware.grouper.app.sqlSync.GrouperSftpToSqlJob

# cron string
# {valueType: "cron", required: true}
# otherJob.sftpToSqlJobId.quartzCron = 

# sftp config id (from grouper.properties) of the endpoint to retrieve the file from
# https://spaces.at.internet2.edu/display/Grouper/Grouper+Sftp+files
# {valueType: "string", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.sftp\\.configId$", required: true}
# otherJob.sftpToSqlJobId.sftpToSql.sftp.configId = 

# remote file to retrieve, e.g. /data01/whatever/MyFile.csv
# {valueType: "string", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.sftp\\.fileNameRemote$", required: true}
# otherJob.sftpToSqlJobId.sftpToSql.sftp.fileNameRemote = 

# if it should be an error when the remote file does not exist
# {valueType: "boolean", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.errorIfRemoteFileDoesNotExist$", defaultValue: "false"}
# otherJob.sftpToSqlJobId.sftpToSql.errorIfRemoteFileDoesNotExist =

# if the file should be deleted from the grouper daemon server after it is processed
# {valueType: "boolean", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.deleteFile$", defaultValue: "false"}
# otherJob.sftpToSqlJobId.sftpToSql.deleteFile =

# database external system config id to hit, defaults to "grouper"
# {valueType: "string", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.database$"}
# otherJob.sftpToSqlJobId.sftpToSql.database = 

# table to sync to, e.g. some_table, or qualified by schema: some_schema.another_table
# {valueType: "string", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.table$", required: true}
# otherJob.sftpToSqlJobId.sftpToSql.table = 

# comma separated columns to sync to, e.g. col1, col2, col3
# {valueType: "string", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.columns$", required: true}
# otherJob.sftpToSqlJobId.sftpToSql.columns = 

# comma separated primary key columns, e.g. col1
# {valueType: "string", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.columnsPrimaryKey$", required: true}
# otherJob.sftpToSqlJobId.sftpToSql.columnsPrimaryKey = 

# if there is a header row
# {valueType: "boolean", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.hasHeaderRow$", defaultValue: "false"}
# otherJob.sftpToSqlJobId.sftpToSql.hasHeaderRow = 

# separator in file
# {valueType: "string", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.separator$", required: true}
# otherJob.sftpToSqlJobId.sftpToSql.separator = 

# escaped separator (cannot contain separator)
# {valueType: "string", regex: "^otherJob\\.([^.]+)\\.sftpToSql\\.escapedSeparator$"}
# otherJob.sftpToSqlJobId.sftpToSql.escapedSeparator =
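
For the sample file and table above, a filled-in configuration might look like the following (the job id "sftpToSqlJobId", the sftp config id "mySftpServer", and the cron schedule are example values):

#####################################################
otherJob.sftpToSqlJobId.class = edu.internet2.middleware.grouper.app.sqlSync.GrouperSftpToSqlJob
otherJob.sftpToSqlJobId.quartzCron = 0 0 5 * * ?
otherJob.sftpToSqlJobId.sftpToSql.sftp.configId = mySftpServer
otherJob.sftpToSqlJobId.sftpToSql.sftp.fileNameRemote = /home/mchyzer/someFile.txt
otherJob.sftpToSqlJobId.sftpToSql.table = my_sftp_sync_table
otherJob.sftpToSqlJobId.sftpToSql.columns = nyuid, application, net_id
otherJob.sftpToSqlJobId.sftpToSql.columnsPrimaryKey = nyuid, application
otherJob.sftpToSqlJobId.sftpToSql.hasHeaderRow = true
otherJob.sftpToSqlJobId.sftpToSql.separator = |
#####################################################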