Senior Backend Developer
<b>Requirements:</b>
<ul><li>Extensive experience with Python preferred, including advanced concepts such as decorators, protocols, functools, context managers, and comprehensions.</li><li>Strong understanding of SQL, database design, and data architecture.</li><li>Experience with Databricks and/or Spark.</li><li>Knowledgeable in data governance, data cataloguing, data quality principles, and related tools.</li><li>Skilled in data extraction, joining, and aggregation tasks, especially with big data and real-time data using Spark.</li><li>Capable of performing data cleansing operations to prepare data for analysis, including transforming data into useful formats.</li><li>Understanding of data storage concepts and logical data structures, such as data warehousing.</li><li>Able to write repeatable, production-quality code for data pipelines, utilizing templating and parameterization where needed.</li><li>Can make data pipeline design recommendations based on business requirements.</li><li>Experience with data migration is a plus.</li><li>Open to new ways of working and new technologies.</li><li>Self-motivated with the ability to set goals and take initiative.</li><li>Driven to troubleshoot, deconstruct problems, and build effective solutions.</li><li>Experience with Git / version control.</li><li>Experience working with larger, legacy codebases.</li><li>Understanding of unit and integration testing.</li><li>Understanding of and experience with CI/CD and general software development best practices.</li><li>A strong attention to detail and a curiosity about the data you will be working with.</li><li>A strong understanding of Linux-based tooling and concepts.</li><li>Knowledge and experience of Amazon Web Services is essential.</li></ul>
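<p>For candidates wondering what "advanced concepts" means here, the following is a minimal, illustrative sketch (not part of the role or its codebase) touching the features named above: a decorator built with <code>functools</code>, a context manager built with <code>contextlib</code>, and a comprehension.</p>
<pre><code>import functools
from contextlib import contextmanager

def log_calls(func):
    """Decorator: records the arguments of each call on the wrapped function."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        wrapper.calls.append(args)
        return func(*args, **kwargs)
    wrapper.calls = []
    return wrapper

@log_calls
@functools.lru_cache(maxsize=None)  # functools: memoize repeated calls
def square(n):
    return n * n

@contextmanager
def managed_resource(name):
    """Context manager: yields a resource and guarantees cleanup on exit."""
    resource = {"name": name, "open": True}
    try:
        yield resource
    finally:
        resource["open"] = False  # cleanup always runs

with managed_resource("db") as res:
    results = [square(i) for i in range(5)]  # list comprehension

print(results)      # [0, 1, 4, 9, 16]
print(res["open"])  # False: cleanup ran when the with-block exited
</code></pre>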
<b>Responsibilities:</b>
<ul><li>Develop and maintain scalable, efficient data pipelines within Databricks, continuously evolving them as requirements and technologies change.</li><li>Build and manage an enterprise data model within Databricks.</li><li>Integrate new data sources into the platform using batch and streaming processes, adhering to SLAs.</li><li>Create and maintain documentation for data pipelines and associated systems, following security and monitoring protocols.</li><li>Ensure data quality and reliability processes are effective, maintaining trust in the data.</li><li>Take ownership of complex data engineering projects and develop appropriate solutions in accordance with business requirements.</li><li>Work closely with stakeholders and manage their requirements.</li><li>Actively coach and mentor others in the team and foster a culture of innovation and peer review within the team to ensure best practice.</li></ul>
<b>Technologies:</b>
<ul><li>Big Data</li><li>CI/CD</li><li>Databricks</li><li>Git</li><li>Support</li><li>Linux</li><li>Python</li><li>SQL</li><li>Security</li><li>Spark</li><li>Web</li><li>Backend</li></ul>
<p><b>More:</b></p>
<p>We are embarking on an ambitious data transformation journey using Databricks, guided by best-practice data governance and architectural principles. As a major UK energy provider committed to 100% renewable energy and sustainability, we focus on delivering exceptional customer experiences. This is initially a 6-month contract with potential for extension, offered as a hybrid role based in our Nottingham office one day every two weeks, though this is negotiable. We celebrate and support diversity and are committed to ensuring equal opportunities for all employees and applicants.</p>
<p>Last updated: week 8 of 2026</p>