
Edelman

Senior Data Engineer

Posted Yesterday
Remote
3 Locations
Senior level

Edelman is a voice synonymous with trust, reimagining a future where the currency of communication is action. Our culture thrives on three promises: boldness is possibility, empathy is progress, and curiosity is momentum. 

At Edelman, we understand diversity, equity, inclusion and belonging (DEIB) transform our colleagues, our company, our clients, and our communities. We are in relentless pursuit of an equitable and inspiring workplace that is respectful of all, reflects and represents the world in which we live, and fosters trust, collaboration and belonging.

Senior Data Engineer

We’re seeking a Senior Data Engineer with strong technical leadership capability to help design, scale, and evolve Edelman’s data platforms. This role blends hands-on engineering with architectural ownership, close collaboration across teams, and the application of Generative AI to real production workflows.

You’ll serve as a technical anchor for data initiatives—owning complex pipelines, shaping platform direction, and guiding engineers through best practices in data, cloud, and AI-enabled systems. You’ll partner closely with the Data Engineering lead, focusing on technical direction and delivery.

Why You'll Love Working with Us

At Edelman, we believe in fostering a collaborative and open environment where every team member’s voice is valued. Our data engineering team thrives on building robust, scalable, and efficient data systems that power insightful decision-making.

We are at an exciting point in our journey, and you’ll work at the intersection of data, AI, and real-world business impact. Our data team is building modern platforms that enable insight, innovation, and responsible AI adoption across the organization. You’ll have the autonomy to shape solutions, the trust to lead technically, and the support to keep pushing the platform forward.

What You’ll Do

Platform & Architecture Leadership

  • Lead the design and evolution of scalable data architectures, supporting batch, streaming, and AI-driven workloads.

  • Own end-to-end data pipelines—from ingestion and transformation through to serving analytics and ML/GenAI use cases.

  • Define and enforce data engineering standards across modelling, orchestration, observability, and reliability.

Technical Leadership & Collaboration

  • Mentor and guide data engineers through code reviews, design discussions, and architectural decisions.

  • Translate business problems into scalable technical solutions, balancing speed, quality, and long-term maintainability.

  • Drive the use of agent-based solutions across the development lifecycle, designing autonomous and semi-autonomous workflows that deliver measurable business value.

  • Clearly document architectures and workflows to support shared understanding and operational excellence.
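To make the agent-based workflows above concrete, here is a minimal, illustrative sketch of a semi-autonomous tool-calling loop. All names are hypothetical; in production the `planner` would typically be an LLM, whereas here it is any callable that maps the run history to the next step.

```python
from dataclasses import dataclass, field
from typing import Callable, Optional

# Illustrative sketch only: a minimal semi-autonomous agent loop.
# The planner inspects the history and either returns the next
# (tool_name, argument) pair or None to signal completion.

@dataclass
class Agent:
    tools: dict[str, Callable[[str], str]]
    planner: Callable[[list[str]], Optional[tuple[str, str]]]
    history: list[str] = field(default_factory=list)

    def run(self, task: str, max_steps: int = 10) -> list[str]:
        """Execute tools chosen by the planner, recording each observation."""
        self.history = [f"task: {task}"]
        for _ in range(max_steps):
            step = self.planner(self.history)
            if step is None:  # planner signals the task is done
                break
            tool_name, arg = step
            result = self.tools[tool_name](arg)
            self.history.append(f"{tool_name}({arg}) -> {result}")
        return self.history
```

Injecting the planner and tools as callables keeps the loop testable offline and bounds autonomy with an explicit step limit.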

Data Engineering & Cloud Execution

  • Build and optimize data pipelines using Databricks, Spark (PySpark), Snowflake, Apache Airflow and Terraform.

  • Design performant data models and lakehouse structures (Delta, Unity Catalog) for analytics and downstream AI consumption.

  • Leverage AWS-native services (e.g. S3, EMR, DynamoDB) to deliver cost-efficient, production-grade solutions.

  • Implement robust data quality, testing, and monitoring (e.g. Great Expectations, logging, alerting).
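As a flavour of the data-quality work described above, here is a plain-Python stand-in for the kind of declarative checks a framework like Great Expectations would run against a Spark or Delta table. The rule names and report shape are illustrative, not any library's actual API.

```python
# Illustrative sketch: declarative row-level data-quality checks,
# in the spirit of Great Expectations but framework-free.
from typing import Any, Callable

Row = dict[str, Any]
Check = Callable[[Row], bool]

def expect_not_null(column: str) -> Check:
    """Rule: the column must be present and non-null."""
    return lambda row: row.get(column) is not None

def expect_between(column: str, lo: float, hi: float) -> Check:
    """Rule: the column must fall within [lo, hi]."""
    return lambda row: row.get(column) is not None and lo <= row[column] <= hi

def validate(rows: list[Row], checks: list[Check]) -> dict[str, int]:
    """Return pass/fail counts; a real pipeline would also log and alert."""
    failed = sum(1 for row in rows if not all(check(row) for check in checks))
    return {"rows": len(rows), "failed": failed, "passed": len(rows) - failed}
```

In a production pipeline these checks would run as a suite inside the orchestrator (e.g. an Airflow task) so that failing batches can be quarantined before serving.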

Generative AI

  • Design data pipelines that power Generative AI applications, including data preparation, enrichment, and feature generation.

  • Integrate third-party APIs into data workflows for use cases such as:

    • Automated data enrichment and classification

    • Intelligent summarization and insight generation

    • Metadata generation and semantic search enablement

    • AI-assisted reporting and decision support

  • Collaborate with ML and Product teams on prompt design, evaluation, and governance, ensuring responsible and reliable AI usage.

  • Support production AI systems through data versioning, lineage, and lifecycle management.

What You Bring

  • 4+ years building and operating enterprise-scale data platforms, with ownership across the full lifecycle.

  • Strong hands-on experience with Databricks, Snowflake, Airflow, and distributed data processing.

  • Advanced Python and SQL, with production-quality engineering standards.

  • Proven experience designing and maintaining cloud-native data infrastructure on AWS.

  • Experience integrating Generative AI models (OpenAI, Claude or similar) into production data or analytics workflows.

  • Solid understanding of CI/CD, Infrastructure as Code, DevOps practices, and operating reliable data systems at scale.

  • A habit of staying current on advances in code agents and automation, and of guiding their responsible adoption across the development lifecycle.

  • Exposure to streaming architectures (Kafka or equivalent) is advantageous.

  • A leadership mindset: proactive, pragmatic, and comfortable influencing technical direction.

  • Excellent communication skills and the ability to work effectively across disciplines.

 

We are dedicated to building a diverse, inclusive, and authentic workplace, so if you’re excited about this role but your experience doesn’t perfectly align with every qualification, we encourage you to apply anyway. You may be just the right candidate for this or other roles.
 

Top Skills

Apache Airflow
AWS
CI/CD
Databricks
DynamoDB
EMR
PySpark
Python
S3
Snowflake
Spark
SQL
Terraform


