
Hybrid Models in Workday Data Flows
| Most Workday environments are not purely cloud. They connect to on-premise ERPs, legacy HRIS systems, local payroll processors, and data warehouses that live behind a firewall. This article covers what hybrid data flows actually look like in practice: the patterns, the trade-offs, and where things go wrong. |
What Hybrid Actually Means in a Workday Context
The word hybrid gets used loosely. In the context of Workday data flows, it has a specific meaning: you have Workday running as your system of record in the cloud, and you have one or more systems (either on-premise or in a different cloud environment) that need to send data to or receive data from Workday on a recurring basis.
That could mean a legacy SAP instance sending general ledger postings to Workday Financials. It could mean a custom Java payroll engine sitting on-premise that receives employee master data from Workday every night. It could mean a benefits carrier that only accepts files via SFTP and cannot talk to a REST API. It could mean a data warehouse in AWS that needs worker records for analytics, but your security team will not allow direct Workday API access from outside the corporate network.
In all these cases, the data needs to move between Workday and systems that are not in the same network, security model, or data format. How you architect that movement determines whether your integration team spends their time building and improving, or firefighting.
| Integration Topology | What It Means in Practice |
| --- | --- |
| Cloud-to-Cloud | Workday and the target system both have public APIs. Data flows directly via REST or SOAP calls, usually orchestrated by an iPaaS. No middleware on-premise required. |
| Cloud-to-On-Premise | Workday pushes or pulls data to/from a system behind a corporate firewall. Requires an agent, VPN tunnel, or SFTP staging layer. The most common hybrid pattern. |
| On-Premise-to-Cloud | An on-premise system (SAP, Oracle EBS, legacy HRIS) writes data that Workday needs to consume. The data must be extracted, transformed, and delivered to Workday’s inbound interface. |
| Multi-Cloud | Workday is cloud, the target is also cloud but a different provider (AWS, Azure, GCP). Network routing is simpler than on-premise, but identity federation and data residency rules may still apply. |
| Bidirectional Hybrid | Data flows both ways, often with different frequencies. For example, Workday sends worker records to SAP daily, and SAP sends cost centre assignments back to Workday weekly. Conflict resolution and record ownership must be defined explicitly. |
The Three Core Hybrid Patterns
Most hybrid Workday data flows fall into one of three architectural patterns. These are not mutually exclusive; a single enterprise may use all three for different integrations.
Pattern 1: File-Based Staging (Extract, Stage, Load)
The oldest and most widely used pattern. Workday (or the source system) generates a flat file (typically CSV, fixed-width, or XML), deposits it on an SFTP server, and the receiving system picks it up on a schedule. This is how most payroll processors, benefits carriers, and legacy ERP systems expect to receive data.
The staging server is the key component. It acts as the boundary point between the cloud and on-premise environments: Workday can push to it from the cloud side, and the on-premise system can pull from it without any inbound firewall rules.
```
# Typical file-based hybrid flow topology:

Workday (cloud)
 └─► EIB / Workday Report-as-a-Service
      └─► SFTP Staging Server (DMZ or cloud-hosted)
           └─► On-Premise Consumer (scheduled pull via cron / ETL job)
                └─► Target System (SAP / Oracle / Legacy HRIS)

# Reverse direction (on-premise to Workday):

On-Premise Source System
 └─► Scheduled extract job (ETL / batch export)
      └─► SFTP Staging Server
           └─► Workday EIB inbound integration (scheduled pick-up and load)
```
Where this pattern works well: payroll interfaces, benefits carrier feeds, regulatory reporting outputs, systems that do not have APIs, and any integration where the receiving team controls only a file delivery spec and nothing else.
Where it breaks down: when data must flow faster than the file schedule allows, when partial failures need to be isolated at the record level (a failed SFTP delivery fails the entire batch), and when the file format does not carry enough metadata to track which records were processed versus rejected at the destination.
Pattern 2: Middleware-Brokered Real-Time (Event-Driven)
An iPaaS platform (Jitterbit, MuleSoft, Boomi, Azure Integration Services) sits between Workday and the target system. When something changes in Workday (a new hire, a job change, a compensation update), it triggers an outbound event that the middleware platform captures, transforms, and delivers to the target system in near real-time.
Workday supports this pattern through two outbound mechanisms: Workday Studio with REST/SOAP callouts (the integration calls an external endpoint directly on completion or on a schedule) and Workday Integration Cloud with event triggers (business process events such as hire complete or termination approved trigger an integration run automatically).
```
# Middleware-brokered event-driven hybrid flow:

Workday Business Process (e.g., Hire Complete)
 └─► Workday Integration Cloud / Studio trigger
      └─► Outbound REST/SOAP call to iPaaS endpoint
           └─► iPaaS Middleware (Jitterbit / MuleSoft / Boomi)
                ├─► Transform payload to target format
                ├─► Route to correct target based on org/region/system
                └─► Deliver to On-Premise Agent
                     └─► On-Premise System API or DB insert

# For on-premise targets, the middleware agent runs inside the network:

iPaaS Cloud Engine ◄──── Secure tunnel ────► On-Premise Agent (lightweight process)
                                              └─► Local system API call
                                                  (no inbound port required)
```
The on-premise agent is what makes this work without opening inbound firewall ports. Jitterbit’s Harmony Agent, MuleSoft’s Runtime Engine, and Boomi’s Atom all work on an outbound-only polling model. The agent inside the network initiates a connection to the cloud platform, pulls pending work, executes it locally, and posts results back. From the firewall’s perspective, all traffic is outbound from the agent.
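As a rough sketch of that outbound-only polling model (the real Harmony/Atom/Runtime protocols are proprietary; the function names here are illustrative stand-ins for the outbound HTTPS calls and the local execution step):

```python
def run_agent_cycle(fetch_pending, execute_locally, post_result):
    """One poll cycle of an outbound-only on-premise agent.

    The agent initiates every connection: it pulls pending work items
    from the cloud platform, runs each against local systems, and posts
    the results back. No inbound firewall port is ever required.
    """
    results = []
    for work_item in fetch_pending():                 # outbound HTTPS pull
        try:
            outcome = execute_locally(work_item)      # local API call / DB insert
            post_result(work_item["id"], "SUCCESS", outcome)  # outbound again
            results.append((work_item["id"], "SUCCESS"))
        except Exception as exc:
            # A local failure is reported back to the cloud engine,
            # not swallowed — the platform decides whether to retry.
            post_result(work_item["id"], "FAILED", str(exc))
            results.append((work_item["id"], "FAILED"))
    return results
```

The design point is that failure reporting is also an outbound call: from the firewall's perspective, success and failure paths look identical.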
Where this pattern works well: HR provisioning workflows that need to create AD/Entra accounts within minutes of a hire being approved, payroll cutoff notifications, and benefits eligibility updates that must reach a carrier API before end of business.
Where it breaks down: when the middleware platform itself goes down (your integration reliability is now tied to two SLAs instead of one), and when the Workday event payload does not include all the data the target system needs, forcing the middleware to make additional Workday API calls to enrich the payload, which adds latency and failure surface.
Ready to Architect Hybrid Workday Data Flows That Bridge Cloud and On-Premise Systems?
From hybrid integration patterns and middleware orchestration to real-time and batch data synchronisation across Workday and legacy systems, Sama Integrations designs data flow architectures that are flexible, scalable, and built for complex enterprise environments. Let's map out your hybrid integration strategy.
Pattern 3: API Gateway / Reverse Proxy (Bidirectional)
For organisations that need on-premise systems to initiate calls to Workday (not just receive data), the reverse proxy or API gateway pattern solves the network routing problem cleanly. An API gateway component sits in the DMZ or in a cloud network layer, receives calls from on-premise systems, validates and forwards them to Workday’s API, and returns the response.
This avoids putting Workday API credentials on every on-premise system that needs to call Workday. The gateway holds the credentials and the routing logic. On-premise systems authenticate to the gateway using their own internal identity.
```
# API Gateway hybrid pattern:

On-Premise System (e.g., SAP, Oracle, custom app)
 └─► Internal API call to gateway endpoint (HTTP/HTTPS, internal only)
      └─► API Gateway (DMZ / cloud edge)
           ├─► Authenticate caller (internal cert or API key)
           ├─► Map to Workday API operation
           ├─► Attach Workday WS-Security / OAuth token
           └─► Forward to Workday API
                └─► Response mapped back to on-premise format
                     └─► Return to calling system

# Token management at the gateway (not on individual on-premise systems):

Gateway token store:
  - Holds Workday OAuth2 client_id and client_secret
  - Issues and refreshes access tokens on a schedule
  - No Workday credentials stored on any on-premise host
```
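The gateway token store described above can be sketched as a small refresh-ahead cache. The class and method names are illustrative; in a real deployment, `request_token` would POST the client credentials to Workday's OAuth2 token endpoint, which is the only place those credentials ever live:

```python
import time

class GatewayTokenStore:
    """Caches the Workday OAuth2 access token at the gateway so that
    on-premise callers never see Workday credentials. `request_token`
    is injected: it returns (token, ttl_seconds)."""

    def __init__(self, request_token, refresh_margin_s=60):
        self._request_token = request_token
        self._margin = refresh_margin_s
        self._token = None
        self._expires_at = 0.0

    def get_token(self, now=None):
        now = time.time() if now is None else now
        # Refresh ahead of expiry so in-flight calls never carry a stale token
        if self._token is None or now >= self._expires_at - self._margin:
            token, ttl_s = self._request_token()
            self._token = token
            self._expires_at = now + ttl_s
        return self._token
```

Refreshing inside a margin before expiry avoids the race where a token passes the check at the gateway but expires before Workday processes the forwarded call.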
Where this pattern works well: environments with many on-premise systems that all need Workday access, security-conscious organisations that need to audit and rate-limit API traffic to Workday, and multi-region deployments where different on-premise sites call the same Workday tenant.
Where it breaks down: the gateway becomes a single point of failure if not deployed with high availability. If the gateway is also doing significant transformation work, it can become a maintenance burden, as business logic that should live in the integration platform creeps into gateway config over time.
| Which Pattern to Choose
Most enterprises use all three. File-based staging for carrier feeds and regulatory outputs, middleware-brokered for provisioning and real-time HR events, and API gateway for on-premise systems that need to query or write back to Workday on demand. The mistake is picking one pattern and trying to force every integration through it. |
Data Latency and Consistency in Hybrid Flows
One of the most underestimated design decisions in a hybrid Workday architecture is the latency model. Different parts of the business have fundamentally different requirements, and the architecture needs to serve them without over-engineering the low-stakes flows or under-engineering the critical ones.
| Data Flow Type | Typical Latency Tolerance | Right Architecture |
| --- | --- | --- |
| New hire provisioning (AD/Entra, email, systems access) | Minutes; delay blocks the employee’s first day | Event-driven, middleware-brokered, real-time trigger from Workday business process |
| Payroll input files (earnings, deductions) | Hours; batch window overnight is acceptable | Scheduled EIB extract to SFTP, on-premise payroll engine picks up at cutoff |
| Benefits carrier eligibility feed | 24 hours; daily batch is standard | File-based EIB export, SFTP delivery to carrier, acknowledgement file returned |
| GL journal entries to finance ERP | Hours; same-business-day is typically the requirement | Workday Financial Management output to middleware, transformed and posted to ERP |
| Worker data to analytics warehouse | Hours to days; depends on reporting cadence | RaaS extract or EIB to staging, ETL pipeline loads into data warehouse |
| Cost centre hierarchy sync (ERP to Workday) | Daily; org structure changes are planned, not real-time | On-premise extract, SFTP or EIB inbound load, validation before apply |
| Termination to badge/access revocation | Under 15 minutes; security requirement | Real-time event trigger, direct API call to physical access system via middleware agent |
Latency tolerance is a business requirement, not a technical preference. The architecture should be calibrated to match it. Running a real-time middleware stack for a weekly report output is wasted complexity. Running a daily batch file for a termination-to-access-revocation flow is a security risk.
Handling Identity and Reference Data Across Boundaries
The most common cause of hybrid integration failures is not network connectivity, authentication, or file formatting. It is reference data mismatch. Workday’s data model uses reference IDs for almost everything: employee IDs, position IDs, cost centre codes, job profile codes, and organisation hierarchy IDs. When these references do not match what the on-premise system uses, the integration fails.
The Cross-Reference Table Problem
On-premise legacy systems typically use their own internal identifiers. SAP uses PERNR for personnel numbers. Oracle uses assignment numbers. An old HRIS might use a combination of employee number and hire date as the primary key. None of these map directly to Workday’s Employee_ID or Worker_Reference.
The right solution is a cross-reference table: a mapping store that maintains the correspondence between each system’s identifier for the same entity. This table needs to be treated as a first-class data asset, not a side effect of the integration.
```sql
-- Example cross-reference table schema (simplified)
CREATE TABLE integration_xref (
  xref_id         BIGINT PRIMARY KEY AUTO_INCREMENT,
  entity_type     VARCHAR(50)  NOT NULL,  -- 'WORKER', 'COST_CENTER', 'POSITION'
  workday_id      VARCHAR(100) NOT NULL,  -- Workday reference ID
  workday_id_type VARCHAR(100) NOT NULL,  -- 'Employee_ID', 'Cost_Center_Reference_ID'
  source_system   VARCHAR(50)  NOT NULL,  -- 'SAP', 'ORACLE', 'LEGACY_HRIS'
  source_id       VARCHAR(100) NOT NULL,  -- The source system's identifier
  effective_from  DATE NOT NULL,
  effective_to    DATE,                   -- NULL = current record
  created_at      TIMESTAMP DEFAULT CURRENT_TIMESTAMP,
  updated_at      TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  UNIQUE KEY uq_workday (entity_type, workday_id, source_system),
  UNIQUE KEY uq_source  (entity_type, source_id, source_system),
  INDEX idx_entity_system (entity_type, source_system)
);

-- Lookup: what is this SAP PERNR in Workday?
SELECT workday_id, workday_id_type
FROM integration_xref
WHERE entity_type = 'WORKER'
  AND source_system = 'SAP'
  AND source_id = '00123456'
  AND effective_to IS NULL;

-- Lookup: what is this Workday Employee_ID in SAP?
SELECT source_id
FROM integration_xref
WHERE entity_type = 'WORKER'
  AND source_system = 'SAP'
  AND workday_id = 'EMP-001234'
  AND effective_to IS NULL;
```
The cross-reference table lives in your middleware layer, not in Workday and not in the on-premise system. It is maintained by the integration layer, populated when new workers are created in Workday and confirmed against the on-premise system’s response to the initial data load.
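A sketch of the lookup the integration layer performs on every record, with an in-memory dict standing in for the table. The function name is illustrative; the key point is that a missing mapping is raised as an explicit event that pauses the record and triggers the mapping-update process, never silently defaulted:

```python
class XrefMiss(Exception):
    """Raised when an identifier has no current cross-reference entry."""

def resolve_workday_id(xref, entity_type, source_system, source_id):
    """Resolve a source-system identifier (e.g. an SAP PERNR) to its
    Workday reference ID. `xref` maps
    (entity_type, source_system, source_id) -> workday_id."""
    key = (entity_type, source_system, source_id)
    if key not in xref:
        # Surfacing the miss feeds the "unknown ID" alert condition;
        # guessing or defaulting here loads data against the wrong entity.
        raise XrefMiss(f"No current mapping for {key}")
    return xref[key]
```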
Reference Data That Changes Independently
Cost centres, job profiles, and organisation hierarchies can change in Workday without any corresponding change in the on-premise system, and vice versa. When Workday reorganises a division and renames cost centres, every integration that passes cost centre codes needs to pick up the new mapping. If your cross-reference table is not updated, the on-premise system receives unrecognised codes and either rejects them or silently loads them against the wrong entity.
The mitigation is to treat reference data changes as integration events. When a cost centre is renamed or an organisation is restructured in Workday, the integration layer should receive a notification (via a scheduled comparison report or an event-triggered integration) and flag the affected cross-reference entries for review before the next data flow runs.
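The scheduled comparison can be as simple as a set difference between the cost centre codes Workday currently reports and the codes the cross-reference table already knows about. A sketch (function name illustrative):

```python
def diff_reference_data(workday_codes, xref_codes):
    """Compare the codes currently in Workday against the codes known
    to the cross-reference table, flagging anything that needs review
    before the next data flow runs."""
    workday, known = set(workday_codes), set(xref_codes)
    return {
        # Present in Workday but unmapped: a new or renamed code that
        # needs an xref entry before it reaches the on-premise system.
        "new_in_workday": sorted(workday - known),
        # Mapped but no longer in Workday: likely retired or renamed.
        "missing_from_workday": sorted(known - workday),
    }
```

Either bucket being non-empty should flag the affected entries for review, matching the "unknown ID" alerting condition later in this article.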
Security Architecture for Hybrid Data Flows
Getting data from Workday to an on-premise system and back requires credentials, network paths, and data handling to be explicitly designed. The defaults (putting the Workday ISU password in a config file on the on-premise server, opening inbound HTTPS to Workday from the DMZ, storing extracted files unencrypted on the SFTP server) all fail a basic security review.
Credential Management
Workday API credentials (ISU username/password or OAuth client secret) should never sit on an on-premise file system in plaintext. The correct pattern depends on your environment:
- HashiCorp Vault or AWS Secrets Manager: the integration agent retrieves credentials at runtime from the secrets store. Rotation happens in the secrets store without touching the integration code.
- Azure Key Vault / GCP Secret Manager: same principle, cloud-provider-native. Works well when your middleware is hosted in the same cloud.
- Encrypted credential stores in iPaaS platforms: Jitterbit, MuleSoft, and Boomi all have built-in credential management that encrypts secrets at rest and injects them into integration flows at runtime without exposing them in configuration files.
Network Path Security
For file-based patterns, the SFTP server should be hosted in a DMZ or cloud environment and use SFTP proper (file transfer over SSH), not FTPS (FTP over SSL/TLS). Workday can push to SFTP directly. The on-premise consumer should pull from SFTP using an outbound-only connection, with no inbound rules required on the corporate firewall.
For API-based patterns, all calls to Workday’s API go over HTTPS to Workday’s public endpoints. Calls from on-premise systems to your middleware gateway should use mutual TLS (mTLS) or a VPN tunnel to avoid exposing the gateway endpoint publicly.
Data at Rest on the Staging Layer
Files deposited on SFTP staging servers contain personal data: names, addresses, salary information, and health benefit elections. These files need to be encrypted at rest (PGP encryption before upload is standard for benefits carriers), and they need a retention policy. A common mistake is leaving extracted files on the SFTP server indefinitely. They should be deleted after confirmed consumption by the downstream system, with a maximum retention window of no more than 48 hours for anything containing sensitive employee data.
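A sketch of that retention sweep, assuming the integration layer tracks which files the downstream system has confirmed consuming (names and structure illustrative):

```python
MAX_AGE_S = 48 * 3600  # retention ceiling for files containing sensitive data

def files_to_purge(listing, now, confirmed_consumed):
    """Decide which staged files to delete: anything the downstream
    system has confirmed consuming, plus anything older than the
    48-hour ceiling regardless of confirmation status.

    `listing` maps filename -> mtime (epoch seconds);
    `confirmed_consumed` is the set of acknowledged filenames."""
    purge = []
    for name, mtime in listing.items():
        if name in confirmed_consumed or now - mtime > MAX_AGE_S:
            purge.append(name)
    return sorted(purge)
```

The hard ceiling matters: a file whose acknowledgement never arrives must still leave the staging server, because the alternative is personal data accumulating indefinitely in the DMZ.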
```bash
# PGP encryption for outbound file delivery (command-line reference)

# Encrypt file using recipient's public key before SFTP upload
gpg --encrypt \
    --recipient benefits-carrier@acmebenefits.com \
    --output employee_eligibility_20250305.csv.gpg \
    employee_eligibility_20250305.csv

# Verify the encrypted file before upload
gpg --list-packets employee_eligibility_20250305.csv.gpg

# Sign and encrypt (preferable for carrier feeds; proves file origin)
gpg --sign --encrypt \
    --recipient benefits-carrier@acmebenefits.com \
    --output employee_eligibility_20250305.csv.gpg \
    employee_eligibility_20250305.csv
```
Monitoring Hybrid Flows: What to Watch
A hybrid integration that runs without errors for three weeks and then fails silently on week four (writing partial data to the on-premise system without triggering any alert) is more dangerous than one that fails loudly. Silent partial failures are the hardest problem in hybrid architectures.
Monitoring needs to cover both ends of the flow, not just the Workday side or just the on-premise side.
Flow-Level Monitoring (End-to-End)
For every integration flow, define a completion contract: what does a successful run look like? At minimum: record count reconciliation (the number of records extracted from Workday should match the number confirmed loaded by the on-premise system; if the SFTP file contained 2,000 rows and the on-premise system acknowledged 1,987, that discrepancy should trigger an alert, not a silent success); completion timestamp with SLA window (if a payroll feed is expected to complete by 23:00 and it has not completed by 23:15, someone should know before the payroll run starts); and acknowledgement files (many legacy systems and carriers return an acknowledgement file after processing an inbound feed; the integration should wait for and validate this file before marking the run complete).
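A minimal sketch of that completion contract as a post-run check (thresholds and field names illustrative; times are compared as HH:MM strings for brevity):

```python
def check_completion_contract(extracted, acknowledged, run_end_hhmm, sla_hhmm):
    """Evaluate a run against its completion contract: record counts
    must reconcile and the run must finish inside the SLA window.
    Returns a list of alert strings; an empty list means a clean run."""
    alerts = []
    if extracted != acknowledged:
        # A 2,000-row file with 1,987 acknowledged is a failure signal,
        # not a silent success.
        alerts.append(
            f"count mismatch: extracted {extracted}, acknowledged {acknowledged}"
        )
    if run_end_hhmm > sla_hhmm:   # lexicographic compare works for HH:MM
        alerts.append(
            f"SLA breach: finished {run_end_hhmm}, window closed {sla_hhmm}"
        )
    return alerts
```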
What to Log on Every Run
```json
// Minimum run metadata to log for every hybrid integration execution
// (transform/destination fields below are representative, not exhaustive):
{
  "run_id": "hr-feed-2025-03-05-001",
  "integration_name": "Workday_to_SAP_Worker_Daily",
  "run_start": "2025-03-05T22:00:03Z",
  "run_end": "2025-03-05T22:07:41Z",
  "status": "SUCCESS",                 // SUCCESS | PARTIAL | FAILED

  "source": {
    "system": "Workday",
    "records_queried": 8420,
    "records_extracted": 8420,
    "errors": 0
  },

  "transform": {
    "records_in": 8420,
    "records_out": 8420,
    "records_rejected": 0              // rejected at mapping/validation stage
  },

  "destination": {
    "system": "SAP_ECC",
    "records_delivered": 8420,
    "records_acknowledged": 8420,
    "rejected_records_file": "rejections/hr-feed-2025-03-05-001-rejects.xml"
  }
}
```
The rejected records file is critical. Every run that has any transformation or load rejection should write the rejected records (with their error reason) to a structured file that the operations team can action. Not to an email. Not just to a log. To a queryable file that tells you exactly which employees were not processed and why.
Alerting Thresholds
| Condition | Alert Level and Action |
| --- | --- |
| Integration run did not start within 5 minutes of scheduled time | Warning: check middleware agent connectivity and Workday availability |
| Run started but no completion signal after 2x expected duration | Critical: integration may be hung; check for large payload or network stall |
| Record rejection rate > 1% of total | Warning: likely data quality issue upstream; route to data owner |
| Record rejection rate > 5% of total | Critical: abort or quarantine run; do not load partial data to downstream system |
| SFTP acknowledgement file not received within 4 hours of delivery | Warning: follow up with receiving team; may indicate their processing failed |
| Cross-reference lookup failure (unknown ID) | Warning: new entity in Workday not yet mapped to on-premise ID; triggers xref update process |
| Consecutive run failure (2+ runs in a row) | Critical with escalation: page on-call integration engineer regardless of time |
Common Mistakes in Hybrid Workday Architecture
Most hybrid integration problems are not unique; they repeat across organisations because the same shortcuts get taken under time pressure. These are the ones we see most often.
Treating Workday as Both System of Record and Staging Layer
When on-premise systems write to Workday and then immediately read from it to get the result, they are using Workday as a messaging queue. Workday is not designed for this. It does not guarantee real-time read-after-write consistency for all data types, and heavy polling against Workday’s API counts against your API rate limits. If your on-premise system needs to confirm that data was applied, use an acknowledgement mechanism at the integration layer, not a polling loop against Workday.
No Defined Record Owner for Bidirectional Flows
When both Workday and an on-premise system can update the same data (say, cost centre assignments in both Workday and SAP) you need a defined rule for which system wins when both have been updated since the last sync. Without this rule, the integration will overwrite one system’s changes with the other’s on every run. The rule needs to be documented and agreed by the business before the integration goes live, not after the first data conflict.
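One workable form of that rule is field-level ownership: every synced field has exactly one owning system, agreed with the business before go-live. A sketch (structure illustrative):

```python
def resolve_conflict(workday_change, sap_change, owner_by_field):
    """Field-level conflict resolution for a bidirectional sync.

    `owner_by_field` encodes the agreed record-ownership rule, e.g.
    {"cost_center": "SAP", "job_title": "WORKDAY"}. The owning
    system's value always wins; there is no timestamp guessing and
    no last-writer-wins race between the two systems."""
    merged = {}
    for field, owner in owner_by_field.items():
        source = sap_change if owner == "SAP" else workday_change
        if field in source:
            merged[field] = source[field]
    return merged
```

A deterministic ownership map also makes run results explainable after the fact, which a timestamp-based rule rarely is once clocks and sync frequencies differ.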
SFTP as the Only Error Recovery Path
If your only way to recover from a failed file delivery is to manually regenerate and re-upload the file, your recovery time depends entirely on how quickly a human can respond. For non-critical daily feeds this is acceptable. For any feed that has a hard business deadline (payroll cutoff, benefits open enrolment, regulatory filing) you need an automated retry mechanism with configurable backoff, and a way to trigger a manual re-run with the same parameters as the failed run without re-running the Workday extraction step.
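A sketch of the retry wrapper, assuming the extracted file is already staged so a retry re-sends the same artefact and never re-runs the Workday extraction step (function names illustrative; `deliver` stands in for the transport call, e.g. an SFTP put, and raises on failure):

```python
import time

def deliver_with_retry(deliver, staged_file, max_attempts=4,
                       base_delay_s=1.0, sleep=time.sleep):
    """Retry a failed file delivery with exponential backoff.
    Returns the number of attempts the delivery took; re-raises the
    last error once max_attempts is exhausted."""
    for attempt in range(1, max_attempts + 1):
        try:
            deliver(staged_file)
            return attempt
        except Exception:
            if attempt == max_attempts:
                raise               # escalate to the manual re-run path
            # 1s, 2s, 4s, ... between attempts
            sleep(base_delay_s * (2 ** (attempt - 1)))
```

Injecting `sleep` keeps the backoff schedule testable; the manual re-run path for hard-deadline feeds would call the same function with the same `staged_file` parameter.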
Schema Changes Going Undetected
Workday releases updates twice a year and can introduce changes to API response schemas, add new required fields, or deprecate old ones. On-premise systems also change; an SAP upgrade can alter the format of the inbound IDOC it expects. Without a schema validation step in the transformation layer, these changes silently corrupt data until someone notices that a field is blank or wrong.
The fix is to validate the structure of every inbound document against a known-good schema before any transformation runs. When validation fails, the run should abort with a clear message explaining what changed, not proceed with incorrect data.
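A sketch of that validation gate for a flat document (a production flow would validate against a full XSD or JSON Schema rather than a field list, but the abort-with-a-clear-message behaviour is the same):

```python
class SchemaValidationError(Exception):
    pass

def validate_document(doc, expected_fields):
    """Structural check run before any transformation: report exactly
    which expected fields are missing and which unexpected fields
    appeared, instead of letting a changed schema silently corrupt
    downstream data."""
    missing = [f for f in expected_fields if f not in doc]
    unexpected = [f for f in doc if f not in expected_fields]
    if missing:
        raise SchemaValidationError(
            f"schema changed: missing {missing}, unexpected {unexpected}"
        )
    return True
```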
A Practical Hybrid Architecture for a Mid-Size Enterprise
To make this concrete, here is how a mid-size enterprise with around 5,000 employees and a hybrid Workday plus SAP HCM environment would typically structure its data flows.
Workday is the system of record for all HR data: worker profiles, positions, compensation, benefits, and talent. SAP ECC remains the system of record for financials: cost centres, GL accounts, and vendor master. A benefits third-party administrator handles health and retirement plans. Active Directory / Microsoft Entra handles identity and access.
```
# Data Flow Map: Mid-Size Hybrid Enterprise

WORKDAY (cloud HoR for HR)
│
├─► [Event-triggered, real-time]
│     Hire / Termination / Job Change
│     └─► Jitterbit Agent (cloud) ──► On-Premise Jitterbit Agent
│           └─► Entra ID / AD
│                (provision / deprovision user)
│
├─► [Nightly batch, file-based]
│     EIB Worker Extract (all active employees)
│     └─► SFTP Staging (cloud, PGP encrypted)
│           └─► SAP ECC inbound IDOC processor (overnight)
│                 └─► PA30 / PA40 updates in SAP
│
├─► [Nightly batch, file-based]
│     EIB Benefits Eligibility Extract
│     └─► SFTP (PGP encrypted) ──► Benefits TPA SFTP
│           └─► Acknowledgement file returned by 08:00
│
├─► [Weekly, file-based]
│     EIB Compensation Extract
│     └─► SFTP ──► Analytics Data Warehouse pipeline
│
└─► [Daily, API-based; reverse direction]
      SAP Cost Centre / GL Account Updates
      └─► SAP delta extract (on-premise)
            └─► Jitterbit On-Premise Agent
                  └─► API Gateway (DMZ)
                        └─► Workday Staffing / Financials API
                             (update cost centre hierarchy in Workday)
```
In this architecture, each flow has a different latency model matched to its business requirement. Identity provisioning is real-time because a new employee needs access on day one. Payroll input is nightly batch because the payroll cutoff is 08:00 the following morning. Benefits eligibility is nightly because carriers process files overnight. The cost centre sync from SAP is daily because org changes are planned, reviewed, and effective the following day.
No single pattern is used for everything. The architecture uses the right tool for each flow’s requirement rather than forcing all flows through one integration topology.
Performance Considerations for Large Volumes
Hybrid architectures that work fine at 1,000 employees often show problems at 10,000. The main pressure points are:
Workday API Rate Limits
Workday enforces per-tenant API rate limits. For REST APIs, the limit is typically 5,000 requests per hour per integration system user. If your hybrid architecture has multiple integration flows all running simultaneously and all making paginated API calls, they can collectively hit this limit. The symptom is HTTP 429 responses (Too Many Requests) that your code needs to handle with exponential backoff.
The mitigation is to stagger the scheduled start times of heavy extraction flows and to use Workday’s Report-as-a-Service (RaaS) for bulk data extraction rather than paginated API calls. A single RaaS call can return 50,000+ worker records in one response, whereas the equivalent Get_Workers paginated calls would require 50+ API requests.
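The 429 handling mentioned above can be sketched as a thin wrapper around the API call (a production client should also honour any Retry-After header the response carries; omitted here for brevity):

```python
import time

def call_with_rate_limit_backoff(call, max_retries=5, sleep=time.sleep):
    """Wrap a Workday API call so that HTTP 429 responses are retried
    with exponential backoff. `call` returns (status_code, body);
    anything other than 429 is returned to the caller as-is."""
    delay = 1.0
    status, body = 429, None
    for _ in range(max_retries):
        status, body = call()
        if status != 429:
            return status, body
        sleep(delay)                    # back off before retrying
        delay = min(delay * 2, 60.0)    # 1s, 2s, 4s, ... capped at 60s
    return status, body                 # still throttled after all retries
```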
File Size and SFTP Transfer Time
A daily delta extract is manageable. A full extract of all workers runs quickly for 5,000 employees, but for a 60,000-employee organisation, an uncompressed CSV with full worker data can be 500 MB or larger. At typical SFTP transfer speeds, that takes 10 to 20 minutes to transfer. If the on-premise processing job starts before the transfer completes, it reads a partial file.
Always use a file arrival sentinel or a manifest file pattern: the sender writes the data file first, then writes a zero-byte or checksum manifest file when the transfer is complete. The receiver waits for the manifest file before starting processing. Never start processing based on the presence of the data file alone.
```bash
# Manifest file pattern: sender side

# 1. Write data file
scp employee_extract_20250305.csv.gpg sftp-server:/inbound/

# 2. Write manifest file AFTER data file transfer completes
echo "FILE=employee_extract_20250305.csv.gpg" > manifest_20250305.txt
echo "RECORD_COUNT=8420" >> manifest_20250305.txt
echo "SHA256=$(sha256sum employee_extract_20250305.csv.gpg | awk '{print $1}')" >> manifest_20250305.txt
scp manifest_20250305.txt sftp-server:/inbound/

# Receiver side: poll for manifest, not for data file
while [ ! -f /inbound/manifest_20250305.txt ]; do
  sleep 30
done

# Validate checksum from manifest before processing
EXPECTED_HASH=$(grep '^SHA256=' /inbound/manifest_20250305.txt | cut -d= -f2)
ACTUAL_HASH=$(sha256sum /inbound/employee_extract_20250305.csv.gpg | awk '{print $1}')
if [ "$EXPECTED_HASH" != "$ACTUAL_HASH" ]; then
  echo "Checksum mismatch: refusing to process partial or corrupted file" >&2
  exit 1
fi
# Proceed with decryption and processing
```
| Designing or troubleshooting a hybrid Workday architecture?
If you are connecting Workday to on-premise systems and dealing with latency mismatches, reference ID conflicts, or silent partial failures, our Workday Integration Services team has built production hybrid architectures across all three patterns covered in this article. For existing hybrid flows with reliability issues (missed records, stale data in on-premise systems, or feeds that fail after every Workday release), our Support & Troubleshooting service includes a full integration health review that covers the end-to-end flow, not just the Workday side. For organisations evaluating whether to move on-premise integration middleware to a cloud-native iPaaS, our Integration Consulting practice provides architecture assessments that account for your full system landscape, not just Workday. |