HRMS Integration Services

Darwinbox

 

User Data flow:

 

Data sync approach: API based - pull from provider
Frequency: Once a day
User unique identifier: Email / employee ID
Activation/deactivation: Automated
Custom filters: Yes
Manager mapping: Yes
Deep link of Disprz: Custom SSO
Clients implemented: Ashok Leyland, TODO
Timeline: 2 to 3 days
Confluence link:

 

 

 

  • Darwinbox provides APIs for user activation and deactivation.

  • The unique identifier can be either email or employee ID (employee ID is preferred).

  • Custom filtering can be applied on top of the pulled data, e.g.:

      Filter records and create learners from a specific grade and above.

      Filter records and create users based on the joining date.

  • Manager mapping is also done automatically.

  • Learners can launch the Disprz application from within Darwinbox and are seamlessly redirected to Disprz.

  • A custom SSO URL will be shared with the Darwinbox team for configuration.
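As a sketch of the custom filtering step, the snippet below keeps only records at or above a grade and with a joining date after a cutoff. The field names `grade` and `date_of_joining` are illustrative assumptions, not Darwinbox's actual response schema:

```python
from datetime import date

def filter_learners(records, min_grade=3, joined_after=date(2023, 1, 1)):
    """Apply custom filters on records pulled from the HRMS before
    creating learners on Disprz. Field names are illustrative."""
    selected = []
    for rec in records:
        if rec.get("grade", 0) < min_grade:
            continue  # below the requested grade cutoff
        if rec.get("date_of_joining", date.min) < joined_after:
            continue  # joined before the requested date
        selected.append(rec)
    return selected
```

Any client-specific rule (department, location, employment type) would slot in the same way, as an extra condition in the loop.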

 

Process:

  • The client defines the UDF.

  • The Darwinbox team shares the API credentials once the UDF is shared.

  • The customer success SPOC raises a JIRA ticket for the ES team along with the UDF.

  • The ES team configures and shares the SSO URL.

  • The Darwinbox team configures the SSO URL.

 


Content Flow

 

 

Content pushing: Pushed via Darwinbox APIs periodically
Learner enrollment: Pushed via Darwinbox APIs - event based
Learner course progress: Pushed via Darwinbox APIs - event based
Learner course completion: Pushed via Darwinbox APIs - event based
Learner course assignment: NA for now
Journey pushing: NA for now
Skill/competency push: NA for now
Confluence link:

 

 

 

  • Disprz content metadata is shared with Darwinbox via their APIs.

  • Self-paced and MOOC content are pushed periodically to Darwinbox.

  • Learner action events are triggered and the activity progress is pushed to Darwinbox.
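The event-based pushes above can be sketched as a small payload builder that fires on each learner action. The field and event names here are assumptions for illustration, not Darwinbox's actual schema:

```python
def build_progress_event(employee_id, course_id, event, progress_pct=0):
    """Payload pushed to the HRMS when a learner event fires.
    Event names and field names are illustrative."""
    allowed = {"ENROLLED", "IN_PROGRESS", "COMPLETED"}
    if event not in allowed:
        raise ValueError(f"unknown event: {event}")
    return {
        "employeeId": employee_id,
        "courseId": course_id,
        "event": event,
        # Completion always reports 100%, regardless of the last
        # progress value recorded.
        "progressPercent": 100 if event == "COMPLETED" else progress_pct,
    }
```

One payload per event keeps the push idempotent: re-sending a COMPLETED event carries the same body.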

 

Plans:

  • Course Enrollment from Darwinbox

  • Disprz Journey on Darwinbox

  • Course assignment and journey assignment

  • Skill sharing to Disprz


SuccessFactors

 

 

 

User Data flow:

 

Data sync approach: OData API based - pull from provider / SFTP based
Frequency: Once a day
User unique identifier: Email / employee ID
Activation/deactivation: Automated
Custom filters: Yes
Manager mapping: Yes
Deep link of Disprz: Disprz API can be used if required
Clients implemented: Bajaj, TODO
Timeline: 2 to 3 weeks
Confluence link:

 

 

 

  • Disprz uses the SuccessFactors ‘per person’ API for the user data pull.

  • User data is synced to Disprz on a daily basis.

  • We have used both SFTP- and API-based approaches in the past.

  • A deep link to the Disprz platform can be embedded in SuccessFactors using the Disprz external API.

  • The SuccessFactors API varies at times per client; we request the ‘per person’ API.

  • SuccessFactors passes a full data dump; we need to ask for incremental data.
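Until incremental data is available, a delta can be derived on our side by diffing today's full dump against the previous one, keyed by employee ID. A minimal sketch (the dict-of-records shape is an assumption):

```python
def diff_user_dump(previous, current):
    """Compare two full user dumps keyed by employee id and return
    the ids to create, update, and deactivate on Disprz."""
    prev_ids, curr_ids = set(previous), set(current)
    to_create = curr_ids - prev_ids            # new in today's dump
    to_deactivate = prev_ids - curr_ids        # dropped from the dump
    to_update = {i for i in prev_ids & curr_ids
                 if previous[i] != current[i]}  # record changed
    return to_create, to_update, to_deactivate
```

This requires persisting the previous day's dump (or a hash of each record) between sync runs.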

 

 

 

Oracle HCM Fusion

 

User Data flow:

 

Data sync approach: API based - pull from provider
Frequency: Once a day
User unique identifier: Email / employee ID
Activation/deactivation: Automated
Custom filters: Yes
Manager mapping: Yes
Deep link of Disprz: Disprz API can be used if required
Clients implemented: Alshirawi, TODO
Timeline: 2 to 3 weeks
Confluence link:

 

 

 

  • A SPOC from Oracle Fusion is needed to take up this integration.

  • It is a multi-API approach.

  • The Employee and Worker APIs of Oracle Fusion are currently used.

  • The initial user dump can be obtained from the client.

  • They do not have a provision to give delta data; this needs to be explored.
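Since the data comes from two APIs, the records have to be joined on a shared person identifier before loading into Disprz. A sketch of that merge step (the `PersonId` key is an assumption for illustration):

```python
def merge_worker_employee(workers, employees):
    """Join Worker and Employee API payloads on a shared person id.
    Employee fields win on conflicting keys; key name is illustrative."""
    workers_by_id = {w["PersonId"]: w for w in workers}
    merged = []
    for emp in employees:
        combined = dict(workers_by_id.get(emp["PersonId"], {}))
        combined.update(emp)  # employee record overrides worker fields
        merged.append(combined)
    return merged
```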

 

 

Workday 

 

User Data flow:

 

Data sync approach: API based - pull from provider
Frequency: Once a day
User unique identifier: Email / employee ID
Activation/deactivation: Automated
Custom filters: Yes
Manager mapping: Yes
Deep link of Disprz: Disprz API can be used if required
Clients implemented: GE Aviation, TODO
Timeline: 2 to 3 weeks
Confluence link:

 

  • Multiple Workday APIs are exposed for consumption.

  • For GE Aviation we implemented the sync using the incremental search API.

  • This can pull the last 7 days of data.

  • A one-time dump of user data is to be uploaded initially.

  • Custom attributes must be requested from Workday for them to appear in the response.

  • The API response is not real-time data; the build runs at certain periods.

  • For GE, we understood that the Workday build does not run on weekends, so on Monday the data from Friday needs to be fetched.
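The weekend gap above can be handled by widening the incremental window when the sync runs on a Monday. A sketch, assuming a daily run and the GE-style weekend schedule:

```python
from datetime import date, timedelta

def incremental_window_start(run_date):
    """Start of the incremental pull window. The Workday build does
    not run over the weekend, so a Monday sync reaches back to
    Friday; otherwise pull from the previous day."""
    if run_date.weekday() == 0:               # Monday
        return run_date - timedelta(days=3)   # back to Friday
    return run_date - timedelta(days=1)
```

The window stays well inside the API's 7-day limit either way.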

 

 

MiHCM

 

User Data flow:

 

Data sync approach: API based - pull from provider
Frequency: Once a day
User unique identifier: Email / employee ID
Activation/deactivation: Automated
Custom filters: Yes
Manager mapping: Yes
Deep link of Disprz: Disprz API can be used if required
Clients implemented: The Hour Glass
Timeline: 2 to 3 weeks
Confluence link:

 

 

  • MiHCM provides portal access where the UAT and production APIs can be tried.

  • They provide the complete user data, not incremental data.

  • Instead, they provide the last-updated date for easy filtering.

  • They provide a custom response per client.
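With the full data set plus a last-updated date, the daily sync can discard unchanged records locally. A sketch, assuming an ISO-formatted `lastUpdated` field (the field name is illustrative):

```python
from datetime import datetime

def changed_since(records, last_sync):
    """Keep only records modified after the previous sync, using the
    last-updated timestamp MiHCM returns on each record."""
    return [r for r in records
            if datetime.fromisoformat(r["lastUpdated"]) > last_sync]
```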

 


Beehive

 

User Data flow:

 

Data sync approach: API based - pull from provider
Frequency: Once a day
User unique identifier: Email / employee ID
Activation/deactivation: Automated
Custom filters: Yes
Manager mapping: Yes
Deep link of Disprz: Disprz API can be used if required
Clients implemented: I2E Consulting
Timeline: 2 to 3 weeks
Confluence link:

 

 

  • Beehive provides incremental data via their API.

  • The API does not provide real-time data. Their build runs in the early morning, and the previous day's data can be pulled at 2 AM UTC for I2E Consulting.

  • A one-time sync of user data needs to be obtained from the client.
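Scheduling the daily pull just after Beehive's build can be sketched as computing the next 2:00 AM UTC slot (the 2 AM time is the I2E Consulting setting; other clients may differ):

```python
from datetime import datetime, timedelta, timezone

def next_pull_at(now):
    """Next 02:00 UTC run time. Beehive's build finishes in the
    early morning, so the previous day's data is ready by then."""
    run = now.replace(hour=2, minute=0, second=0, microsecond=0)
    if run <= now:
        run += timedelta(days=1)  # past today's slot; schedule tomorrow
    return run
```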

 

 

BambooHR

 

User Data flow:

 

Data sync approach: API based - pull from provider
Frequency: Once a day
User unique identifier: Email / employee ID
Activation/deactivation: Automated
Custom filters: Yes
Manager mapping: Yes
Deep link of Disprz: Disprz API can be used if required
Clients implemented: Palo IT
Timeline: 2 to 3 weeks
Confluence link:

 

  • The APIs were identified and explored internally by our team.

  • They have exposed multiple APIs; we can use the appropriate API that suits the client's requirement.

  • We used the custom report API approach for Palo IT.

  • The required parameters are passed dynamically to the API.
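The dynamic-parameter idea can be sketched as building the custom report request body per client, with the field list chosen at configuration time. The specific field names below are examples, not a fixed contract:

```python
def custom_report_request(fields, title="Disprz user sync"):
    """Request body for a custom-report style pull; the caller
    chooses which fields to return, so each client can get a
    different column set. Field names are illustrative."""
    if not fields:
        raise ValueError("at least one field is required")
    return {"title": title, "fields": list(fields)}
```

A per-client config file then only needs to list the fields that client exposes.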

 

 
