Migrate to Databricks. End to end. On schedule.

Databricks migration done right — for EMEA enterprises that need a finished platform, not another stuck-at-80% project.

Migrate from anywhere

Pick your source. We've done it.

Oracle → Databricks

Snowflake → Databricks

Synapse → Databricks

Teradata → Databricks

Our migration approach

A migration that ships, not one that drags.

Phase 1 — Discovery & inventory

What happens → We catalogue every pipeline, table, report, and dependency. We map them to target Databricks objects. You get a one-document inventory with risk ratings.

Who owns it → Cosmos Thrace senior engineer plus your data lead.

Deliverable → Migration inventory plus risk register.

(2 to 4 weeks)

Phase 2 — Architecture design

What happens → We design the target Databricks architecture for your specific workload, not a generic template. Cost model included. Unity Catalog structure defined.

Who owns it → Cosmos Thrace architect plus your CTO or Head of Data.

Deliverable → Architecture decision document plus cost forecast.

(2 to 3 weeks)

Phase 3 — Migration execution

What happens → Your pipelines and tables move source by source. We run the old and new platforms in parallel for 2 to 4 weeks per data domain so consumers can validate the output; a minimal parity-check sketch follows this phase. We don't cut over until you confirm parity.

Who owns it → Embedded Cosmos engineering team plus your data engineering team.

Deliverable → Each data domain live on Databricks with parity confirmed.

(8 to 24 weeks depending on scale)
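
For illustration only: during the parallel run, a parity check typically comes down to comparing row counts and column-level checksums between the staged source extract and the migrated Delta table. A minimal PySpark sketch, with both table names hypothetical:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

def summarise(df):
    # Row count plus an order-independent checksum across all columns.
    return df.select(
        F.count(F.lit(1)).alias("row_count"),
        F.sum(F.xxhash64(*df.columns).cast("decimal(38,0)")).alias("checksum"),
    ).first()

src = summarise(spark.read.table("legacy_mirror.sales.orders"))   # staged copy of the source
tgt = summarise(spark.read.table("prod.sales.orders"))            # migrated Delta table

assert src.row_count == tgt.row_count, "row counts diverge"
assert src.checksum == tgt.checksum, "column contents diverge"

In practice the checks also include per-column aggregates and business-level reconciliation reports; the point is that cutover waits on evidence, not on a date.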

Phase 4 — Cutover & enablement

What happens → Final cutover per domain. BI tooling repointed. Knowledge transfer to your team. We stay one month post-cutover for issue triage.

Who owns it → Cosmos Thrace plus your team.

Deliverable → Production Databricks environment fully owned by your team.

(2 to 4 weeks)

The Lakebridge angle

Lakebridge: the migration tool we know inside out.

Lakebridge is Databricks' native migration tool. It's powerful for the right kind of migration and oversold for the wrong kind.

Where Lakebridge helps.

SQL-based source platforms (Snowflake, Synapse, Teradata) where the migration is mostly query rewriting and schema mapping. It accelerates work that would otherwise be tedious.

Where Lakebridge struggles.

Heavy custom ETL code, stored procedures with business logic baked in, non-SQL sources. Lakebridge can scaffold the start, but the last 30% of the migration is engineering, not tooling.

Our approach.

We use Lakebridge where it earns its place and write the rest by hand. The result is a migration that finishes, instead of one that's "85% Lakebridge'd" and stuck.

A test for any vendor proposal you receive.

If the quote leans on Lakebridge as the migration strategy, ask what happens to the 30% of code the tool can't translate. The answer tells you whether the migration will actually ship.

Book a 30-minute call and tell us about your migration plans.

The migration assessment is completely free. We walk through the most important phases of your migration and check how well prepared you are for each.

Source platforms

Going from your current platform to Databricks has never been easier.


Oracle → Databricks

What's hard about it.

Oracle PL/SQL stored procedures don't translate cleanly. Many enterprises have decades of business logic embedded in PL/SQL that nobody documented. The "lift-and-shift" promise breaks the moment you hit a procedure that calls another procedure that calls a job scheduler that nobody understands.

How we handle it.

PL/SQL inventory in Phase 1. Procedures triaged by business value: rewrite the critical ones in PySpark, deprecate the obsolete ones, document the unclear ones with stakeholder input. We don't promise a tool can do this. A senior engineer does.
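
To make "rewrite the critical ones in PySpark" concrete, here is the shape a rewritten procedure usually takes: a typical PL/SQL aggregation-and-upsert routine becomes a small PySpark job ending in a Delta MERGE. A minimal sketch, with every table and column name hypothetical:

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical rewrite of a nightly PL/SQL aggregation procedure.
orders = spark.read.table("bronze.erp.orders")

daily_revenue = (
    orders
    .where(F.col("status") == "BOOKED")
    .groupBy("order_date", "region")
    .agg(F.sum("net_amount").alias("revenue"))
)

# A Delta MERGE replaces the procedure's cursor-driven UPDATE/INSERT loop.
daily_revenue.createOrReplaceTempView("daily_revenue_stage")
spark.sql("""
    MERGE INTO gold.finance.daily_revenue AS t
    USING daily_revenue_stage AS s
      ON t.order_date = s.order_date AND t.region = s.region
    WHEN MATCHED THEN UPDATE SET t.revenue = s.revenue
    WHEN NOT MATCHED THEN INSERT *
""")

The value is not the syntax; it is that the business logic gets read, questioned, and tested on the way across.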

A specific outcome.

[CASE-PLACEHOLDER] We migrated 47 Oracle data marts for a Benelux retailer. 1,200 stored procedures inventoried. 380 rewritten, 720 deprecated, 100 archived as historical reference. 18-week execution. Quarterly close went from 9 days to 2.


Snowflake → Databricks

What's hard about it.

Snowflake's syntax is closer to Databricks SQL than most sources, so the SQL itself usually translates. The hard part is what's around the SQL: tasks, streams, semi-structured handling, the Snowflake-specific features (zero-copy clones, Time Travel) that have Databricks equivalents but different mechanics.

How we handle it.

SQL bulk-translates via Lakebridge. Tasks and streams rebuild as Delta Live Tables or Workflows. Time Travel maps to Delta time travel with retention policy alignment. Semi-structured columns map to STRUCT or VARIANT in Delta.
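
One example of "same feature, different mechanics": Time Travel queries carry over to Delta almost verbatim, but the retention window is something you set explicitly on the Delta table rather than a Snowflake account parameter. A minimal sketch, with the table name, date, and retention period hypothetical:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Delta time travel: query the table as it was at a point in time.
as_of = spark.sql(
    "SELECT * FROM analytics.sales.orders TIMESTAMP AS OF '2024-05-01 00:00:00'"
)

# Align retention with whatever DATA_RETENTION_TIME_IN_DAYS was on the Snowflake side.
spark.sql("""
    ALTER TABLE analytics.sales.orders SET TBLPROPERTIES (
      'delta.logRetentionDuration' = 'interval 30 days',
      'delta.deletedFileRetentionDuration' = 'interval 30 days'
    )
""")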

A specific outcome.

[CASE-PLACEHOLDER] A financial services client running on Snowflake for 4 years migrated to Databricks for the ML side and kept Snowflake for pure BI. Coexistence pattern, not full replacement. Combined platform cost dropped meaningfully and ML training time improved severalfold.


Azure Synapse → Databricks

What's hard about it.

Synapse Dedicated Pools and Spark Pools are two different products sharing one name. Synapse Pipelines depend on ADF orchestration. What actually needs migrating varies wildly from one setup to the next.

How we handle it.

Storage stays. Synapse already uses ADLS Gen2, so Databricks connects to your existing Bronze/Silver/Gold containers via Access Connectors and Unity Catalog External Locations. Compute migrates: Dedicated Pools become Databricks SQL Warehouses, Synapse Spark Pools become Databricks job clusters. ADF orchestration lifts to Databricks Workflows.
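
A sketch of what "storage stays" means in practice: Unity Catalog points at the containers you already have, and tables are registered over data that never moves. The storage account, credential, and table names below are hypothetical, and the sketch assumes a storage credential backed by an Access Connector already exists:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Register the existing Silver container as a Unity Catalog external location.
spark.sql("""
    CREATE EXTERNAL LOCATION IF NOT EXISTS silver_zone
    URL 'abfss://silver@yourlakehouse.dfs.core.windows.net/'
    WITH (STORAGE CREDENTIAL access_connector_credential)
""")

# Register an existing Delta table in place; nothing is copied.
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.sales.orders
    USING DELTA
    LOCATION 'abfss://silver@yourlakehouse.dfs.core.windows.net/sales/orders'
""")

If the existing files are Parquet rather than Delta, the same pattern applies with a conversion step in front of the registration.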

A specific outcome.

[CASE-PLACEHOLDER] A manufacturing client with 3 Synapse environments (dev/test/prod) migrated to a single Databricks workspace with 3 Unity Catalogs in 12 weeks. Storage retained, ADF jobs converted. Compute spend dropped substantially.


Teradata → Databricks

What's hard about it.

Teradata workloads are usually decades-old, business-critical, and supported by Teradata-specific extensions (BTEQ, FastLoad, MultiLoad). Vendor lock-in is severe and the people who originally built the system are often retired.

How we handle it.

The pattern: stabilise the Teradata system first, document its actual behaviour, then migrate workload by workload to Databricks. We don't recommend "big bang" Teradata migrations. We don't believe in them.
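
As an illustration of "workload by workload": a FastLoad or BTEQ flat-file load typically re-lands as a COPY INTO ingest into a Delta table, which is idempotent by design. A minimal sketch, with the path, table, and file format purely hypothetical:

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical replacement for a FastLoad-style pipe-delimited flat-file load.
# COPY INTO tracks which files it has already loaded, so reruns don't duplicate rows.
spark.sql("""
    COPY INTO landing.finance.policy_transactions
    FROM 'abfss://landing@yourlakehouse.dfs.core.windows.net/policy_transactions/'
    FILEFORMAT = CSV
    FORMAT_OPTIONS ('header' = 'true', 'delimiter' = '|')
    COPY_OPTIONS ('mergeSchema' = 'true')
""")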

A specific outcome.

[CASE-PLACEHOLDER] An insurance client with 25 years of Teradata workloads migrated 60% of their critical pipelines in a 9-month engagement. The remaining 40% was deprecated rather than migrated, after Phase 1 discovered those workloads were no longer used.

Deliverables

What you get

→ Architecture decision document (Phase 2 deliverable, yours to keep)

→ Inventory plus risk assessment (Phase 1 deliverable)

→ Phased migration plan with go/no-go gates

→ Working Databricks plus Unity Catalog environment

→ Data lineage plus governance setup

→ Knowledge transfer to your team, with documentation written for them, not for us

Book a 30-minute call and tell us about your migration plans.

The migration assessment is completely free. We walk through the most important phases of your migration and check how well prepared you are for each.

FAQ

Frequently Asked Questions

How long does a typical migration take?
What about downtime?
Can you migrate just part of our stack?
What happens to our existing pipelines?
Do you do post-migration support?
How is migration pricing structured?