
Solvd, Inc.

Data Engineer (Databricks)

Posted 7 Days Ago
Remote
5 Locations
Mid level

Solvd Inc. is a rapidly growing AI-native consulting and technology services firm delivering enterprise transformation across cloud, data, software engineering, and artificial intelligence. We work with industry-leading organizations to design, build, and operationalize technology solutions that drive measurable business outcomes.

Following the acquisition of Tooploox, a premier AI and product development company, Solvd now offers true end-to-end delivery—from strategic advisory and solution design to custom AI development and enterprise-scale implementation. Our capability centers combine deep technical expertise, proven delivery methodologies, and sector-specific knowledge to address complex business challenges quickly and effectively.

We are looking for a Data Engineer to develop an AI-powered data mapping recommendation platform to speed up the integration and validation of complex datasets. The system will automate data extraction, mapping, and validation processes.

What you'll do
  • Build and maintain scalable data pipelines with Databricks, Spark, and PySpark.

  • Manage data governance, security, and credentials using Unity Catalog and Secret Scopes.

  • Develop and deploy ML models with MLflow; work with LLMs and embedding-based vector search.

  • Apply ML/DL techniques (classification, regression, clustering, transformers) and evaluate using industry metrics.

  • Design data models and warehouses leveraging dbt, Delta Lake, and Medallion architecture.

  • Work with healthcare data standards and medical terminology mapping.
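As a rough illustration of the Medallion pattern named above (Bronze raw ingest, Silver cleaned and typed, Gold aggregated), here is a minimal pure-Python sketch. On the actual platform these layers would be Spark DataFrames backed by Delta tables; the record fields and values below are invented for illustration only.

```python
# Minimal Medallion-style flow: Bronze (raw) -> Silver (cleaned) -> Gold (aggregated).
# Pure-Python stand-in for what would be Spark/Delta tables in practice.

bronze = [  # raw ingested records, possibly dirty
    {"patient_id": "p1", "code": " loinc:1234 ", "value": "7.2"},
    {"patient_id": "p2", "code": "LOINC:1234", "value": "bad"},
    {"patient_id": "p1", "code": "LOINC:5678", "value": "3.1"},
]

def to_silver(rows):
    """Clean and type the raw rows; drop anything unparsable."""
    out = []
    for r in rows:
        try:
            out.append({
                "patient_id": r["patient_id"],
                "code": r["code"].strip().upper(),
                "value": float(r["value"]),
            })
        except ValueError:
            continue  # a real pipeline would quarantine, not silently drop
    return out

def to_gold(rows):
    """Aggregate to a reporting-friendly shape: mean value per code."""
    totals = {}
    for r in rows:
        s, n = totals.get(r["code"], (0.0, 0))
        totals[r["code"]] = (s + r["value"], n + 1)
    return {code: s / n for code, (s, n) in totals.items()}

silver = to_silver(bronze)
gold = to_gold(silver)
```

The point of the layering is that each stage is independently queryable and re-runnable: Bronze preserves the raw feed, Silver enforces types and conventions, Gold serves downstream consumers.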

What you bring

Databricks expertise

Hands-on experience with the Databricks platform, including:

  • Unity Catalog: Managing data governance, access control, and auditing across workspaces.

  • Secret Scopes: Secure handling of credentials and sensitive configurations.

  • Apache Spark / PySpark: Writing performant, scalable distributed data pipelines.

  • MLflow: Managing ML lifecycle including experiment tracking, model registry, and deployment.

  • Vector Search: Working with vector databases or search APIs to build embedding-based retrieval systems.

  • LLMs (Large Language Models): Familiarity with using or fine-tuning LLMs in Databricks or similar environments.
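The embedding-based retrieval item above reduces to nearest-neighbour lookup by similarity; here is a dependency-free sketch using cosine similarity over toy 3-dimensional vectors. In production this would be Databricks Vector Search or a vector database over real model embeddings; the ids and vectors below are made up.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embedding index": document ids mapped to invented vectors.
index = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.0, 1.0, 0.0],
    "doc_c": [0.7, 0.7, 0.0],
}

def search(query_vec, k=2):
    """Return the k nearest document ids by cosine similarity."""
    scored = sorted(index, key=lambda i: cosine(query_vec, index[i]), reverse=True)
    return scored[:k]
```

A real system replaces the linear scan with an approximate-nearest-neighbour index, but the ranking contract is the same.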

Data Engineering skills

Experience designing and maintaining robust data pipelines:

  • Data Modeling & Warehousing: Dimensional modeling, star/snowflake schemas, SCD (Slowly Changing Dimensions).

  • Modern Data Stack: Familiarity with dbt, Delta Lake, and the Medallion architecture (Bronze, Silver, Gold layers).
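Of the modeling topics above, Slowly Changing Dimensions is the easiest to show concretely. A minimal Type 2 sketch in plain Python follows; in practice this would be a Delta Lake MERGE or a dbt snapshot, and the column names (`key`, `city`, `start_date`, `end_date`) are invented for the example.

```python
from datetime import date

def scd2_upsert(dim_rows, incoming, today):
    """Type 2 upsert: close the current row and append a new version
    when a tracked attribute changes. dim_rows is mutated in place."""
    for new in incoming:
        current = next(
            (r for r in dim_rows
             if r["key"] == new["key"] and r["end_date"] is None),
            None,
        )
        if current is None:
            # brand-new key: open its first version
            dim_rows.append({**new, "start_date": today, "end_date": None})
        elif current["city"] != new["city"]:  # tracked attribute changed
            current["end_date"] = today        # close the old version
            dim_rows.append({**new, "start_date": today, "end_date": None})
        # unchanged rows are left alone

dim = [{"key": 1, "city": "Bristol", "start_date": date(2024, 1, 1), "end_date": None}]
scd2_upsert(dim, [{"key": 1, "city": "Bath"}, {"key": 2, "city": "Leeds"}], date(2025, 1, 1))
```

The invariant to preserve is that each key has exactly one open row (`end_date is None`), so point-in-time joins stay unambiguous.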

Nice to have

Machine Learning knowledge

A solid foundation in machine learning is a plus, including:

  • Traditional Machine Learning Techniques: Classification, regression, clustering, etc.

  • Model Evaluation & Metrics: Precision, recall, F1-score, ROC-AUC, etc.

  • Deep Learning (DL): Understanding of neural networks and relevant frameworks.

  • Transformers & Attention Mechanisms: Knowledge of modern NLP architectures and their applications.
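The evaluation metrics listed above reduce to a handful of counts over predicted versus true labels; a self-contained binary-classification sketch (toy label lists, no ML library needed):

```python
def precision_recall_f1(y_true, y_pred):
    """Binary classification metrics from paired 0/1 label lists."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

p, r, f = precision_recall_f1([1, 1, 0, 0, 1], [1, 0, 0, 1, 1])
```

ROC-AUC needs predicted scores rather than hard labels, so it is omitted here; in Databricks these metrics would typically be logged per run via MLflow rather than computed by hand.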

Preferred domain knowledge

  • Experience with healthcare data standards and medical code systems such as eCQM, VSAC, RxNorm, LOINC, SNOMED, etc.

  • Understanding of medical terminology and how to map or normalize disparate coding systems.
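At its simplest, normalizing disparate coding systems is a crosswalk lookup from (system, code) pairs to a canonical concept. The sketch below uses entirely invented codes and canonical ids; real work would draw on VSAC/RxNorm/SNOMED content, with fuzzy or embedding-based matching for the unmapped long tail.

```python
# Toy crosswalk from source (system, code) pairs to an invented canonical id.
# All codes below are dummies, not real terminology content.
CROSSWALK = {
    ("ICD10", "X00.1"): "CANON-001",
    ("SNOMED", "99999999"): "CANON-001",
    ("LOINC", "0000-0"): "CANON-002",
}

def normalize(system, code):
    """Return the canonical id, or None when the pair is unmapped
    (unmapped pairs would be routed to a human review queue)."""
    return CROSSWALK.get((system.upper(), code.strip()))
```

Normalizing case and whitespace before lookup matters in practice, since source feeds rarely agree on formatting.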


Tech stack

Platforms & Tools: Databricks, Unity Catalog, Secret Scopes, MLflow

Languages & Frameworks: Python, PySpark, Apache Spark

Machine Learning & AI: Traditional ML techniques, Deep Learning, Transformers, Attention Mechanisms, LLMs

Search & Retrieval: Vector databases, embedding-based vector search

Data Engineering & Modeling: dbt, Delta Lake, Medallion architecture (Bronze/Silver/Gold), Dimensional modeling, Star/Snowflake schemas

Domain (Optional): Healthcare data standards (eCQM, VSAC, RxNorm, LOINC, SNOMED)

When you join Solvd, you'll…

  • Shape real-world AI-driven projects across key industries, working with clients from startup innovation to enterprise transformation.

  • Be part of a global team with equal opportunities for collaboration across continents and cultures.

  • Thrive in an inclusive environment that prioritizes continuous learning, innovation, and ethical AI standards.

Ready to make an impact?

If you're excited to build things that matter, champion responsible AI, and grow with some of the industry's sharpest minds, apply today and let's innovate together.

Top Skills

Spark
Databricks
dbt
Delta Lake
LLMs
MLflow
PySpark
Python
Secret Scopes
Unity Catalog
Vector Databases


