Location: Gibraltar
Role: Full Time
Competitive salary.
Comprehensive benefits package.
Opportunity to work with a leading company in the online betting and gaming industry.
Professional development and growth opportunities.
A dynamic and collaborative work environment.
Pragmatic Solutions provides the core infrastructure for operators in the iGaming space. Our modular Player Account Management (PAM) platform powers real-time experiences for thousands of users — and our lakehouse data platform transforms that activity into clean, queryable insights.
We’re looking for a mid-level Data Engineer to help us scale and evolve our client-facing data lake. In this role, your primary focus will be ingesting and modelling transactional data from production MySQL instances, transforming it into efficient, governed data products that support BI, compliance, and operational needs.
You’ll own ingestion pipelines, optimise Redshift performance, implement quality controls, and ensure the lakehouse platform delivers reliably at scale, all with minimal supervision.
Responsibilities:
Ingesting and modelling production data: Build batch and streaming pipelines that extract high-volume transactional data from MySQL 5.7/8.0 production environments. Design schemas that accurately reflect operator activity across sportsbook, casino, risk, KYC and bonus systems.
Building robust lakehouse pipelines: Use Apache Iceberg on S3 to manage raw, staged, and curated layers. Implement schema evolution, compaction, and time-travel to support data traceability and rollback.
Exposing data in Redshift for BI: Model and publish data products in Redshift, ensuring optimal distribution key and sort key configurations to support large-scale BI workloads efficiently.
Performance optimisation: Design intelligent partitioning strategies, monitor query latency, and implement compaction to keep costs and response times under control.
Governance & quality: Validate ingested data with automated tests, constraints, and lineage tracking (e.g., dbt, Airflow). Ensure changes to data models are safe and auditable.
Observability & reliability: Deploy monitoring and alerting for all critical pipelines. You’ll maintain what you build, ensuring it runs smoothly in production and meets SLAs.
Documentation & enablement: Produce clear technical documentation and SQL usage examples to empower internal teams and external operators to self-serve safely.
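To give a flavour of the Redshift modelling work described above, a curated table might be declared with an explicit distribution key and sort key so that joins and date-range BI queries stay fast at scale. This is an illustrative sketch only — the schema, table, and column names below are hypothetical, not our actual data model:

```python
# Hypothetical Redshift DDL for a curated bet-transactions table.
# DISTKEY co-locates rows by operator for efficient joins; the compound
# SORTKEY lets date-restricted BI queries skip blocks at scan time.
CREATE_BETS_TABLE = """
CREATE TABLE curated.bet_transactions (
    bet_id        BIGINT      NOT NULL,
    operator_id   INTEGER     NOT NULL,
    player_id     BIGINT      NOT NULL,
    placed_at     TIMESTAMP   NOT NULL,
    stake_amount  DECIMAL(18, 4),
    product       VARCHAR(32)          -- e.g. 'sportsbook', 'casino'
)
DISTSTYLE KEY
DISTKEY (operator_id)
COMPOUND SORTKEY (operator_id, placed_at);
"""

print(CREATE_BETS_TABLE)
```

Choices like these (DISTSTYLE, sort-key ordering) are exactly the knobs the role tunes against real workload patterns.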
Skills and Experience:
3–5 years of experience in data engineering roles (cloud-native).
Strong SQL and Python skills, with clean, testable code practices.
Experience with ingesting from relational sources, especially MySQL.
Familiarity with Apache Iceberg, Parquet, and object storage (S3).
Proficiency in query tuning, partitioning, and Redshift distribution key design.
Hands-on experience with CI/CD pipelines, Terraform, and observability tooling (e.g., Grafana, Datadog).
A proactive, delivery-focused mindset with the ability to work independently.
Bonus Points:
Experience with Apache Flink or Spark for high-throughput stream processing.
Prior work in domains like iGaming, fintech, or ad-tech.
Familiarity with dbt, Great Expectations, Lake Formation, or Glue Catalog.
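As a sense of the automated data-quality validation this role involves (the kind of checks that dbt tests or Great Expectations suites formalise in production), a hand-rolled row-level check might look like the following. This is a simplified, hypothetical sketch — field names and rules are illustrative, not our production tooling:

```python
# Simplified illustration of a row-level quality check, of the kind that
# dbt tests or Great Expectations suites formalise. Field names and the
# allowed product codes here are hypothetical.

def validate_bet_rows(rows):
    """Return (row_index, reason) pairs for rows failing any check."""
    failures = []
    for i, row in enumerate(rows):
        if row.get("bet_id") is None:
            failures.append((i, "bet_id must not be null"))
        if row.get("stake_amount", 0) < 0:
            failures.append((i, "stake_amount must be non-negative"))
        if row.get("product") not in {"sportsbook", "casino"}:
            failures.append((i, "unknown product code"))
    return failures

sample = [
    {"bet_id": 1, "stake_amount": 10.0, "product": "sportsbook"},
    {"bet_id": None, "stake_amount": -5.0, "product": "bingo"},
]
print(validate_bet_rows(sample))  # the second row fails all three checks
```

In practice these rules live in declarative test suites wired into CI/CD, so a failing check blocks a bad model change before it reaches operators.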
Why Join Us:
Make an immediate impact: your pipelines will power dashboards used by operators daily.
Work with a modern stack: Redshift, Iceberg, Flink, Airflow, dbt, Spark, Terraform.
Competitive compensation, learning budget, high-spec hardware, and growth paths.
Join the Team
Please complete the form below to apply