
Sr. Data Engineer, Scala/Python Spark, AWS

Capital One


Location:
McLean, VA
Date:
09/18/2017
Job Code:
capitalone2-R31124
Categories:
  • Engineering

Job Details

Company: Capital One

Job Title: Sr. Data Engineer, Scala/Python Spark, AWS

JobID: capitalone2-R31124

Location: McLean, VA, 22106, USA

Description: McLean 1 (19050), United States of America, McLean, Virginia



At Capital One, we’re building a leading information-based technology company. Still founder-led by Chairman and Chief Executive Officer Richard Fairbank, Capital One is on a mission to help our customers succeed by bringing ingenuity, simplicity, and humanity to banking. We measure our efforts by the success our customers enjoy and the advocacy they exhibit. We are succeeding because they are succeeding.



Guided by our shared values, we thrive in an environment where collaboration and openness are valued. We believe that innovation is powered by perspective and that teamwork and respect for each other lead to superior results. We elevate each other and obsess about doing the right thing. Our associates serve with humility and a deep respect for their responsibility in helping our customers achieve their goals and realize their dreams. Together, we are on a quest to change banking for good.



**Senior Data Engineer - Scala/Python Spark, AWS**



**The Role:** We are looking for driven individuals to join our team of passionate **data engineers** in creating Capital One’s next generation of data products and capabilities. You will collaborate as part of a cross-functional **Agile** team to create and enhance software that enables state-of-the-art, next-generation **Big Data** and **Fast Data** applications.



- You will develop and deploy distributed computing **Big Data** applications using **Open Source frameworks** like **Apache Spark, Apex, Flink, Storm, Akka, and Kafka** on the **AWS Cloud**



- You will utilize programming languages like **Java, Scala, and Python**, along with **Open Source RDBMS** and **NoSQL** databases like **PostgreSQL** and **Redshift**



- You will build **data APIs** and data delivery services that support critical operational and analytical applications for our internal business operations, customers and partners



- You will transform complex analytical models into scalable, production-ready solutions



- You will continuously integrate and ship code into our on-premises and cloud production environments



- You will work directly with Product Owners and customers to deliver data products in a collaborative and agile environment






**Additional Skills/Experience:**



- Experience with recognized industry patterns, methodologies, and techniques

- Familiarity with Agile engineering practices

- 2+ years of experience with Cloud computing (AWS a plus)

- 4+ years' experience with Relational Database Systems and SQL (PostgreSQL and Redshift a plus)

- 4+ years of **ETL design, development, and implementation** experience



**Who you are:**



- You yearn to be part of cutting-edge, high-profile projects and are motivated by delivering world-class solutions on an aggressive schedule



- You are not intimidated by challenges; you thrive under pressure, are passionate about your craft, and are hyper-focused on delivering exceptional results



- You love to learn new technologies and mentor junior engineers to raise the bar on your team



- It would be awesome if you have a robust portfolio on GitHub and/or open-source contributions you are proud to share



- Passionate about intuitive and engaging user interfaces, as well as new and emerging concepts and techniques



**What we have:**



- Flexible work schedules



- Convenient office locations



- Generous salary and merit-based pay incentives



- A startup mindset with the wallet of a top 10 bank



- Monthly innovation challenges dedicated to test-driving cutting-edge technologies



- Your choice of equipment (MacBook/PC, iPhone/Android Device)



#ilovedata #bigdata



**Keywords:** Agile, Big Data, Fast Data, Open Source, Scala, Apache Spark, Apex, Flink, Storm, Akka and Kafka on AWS Cloud, Hadoop Stack, Distributed Computing, NoSQL, Relational Database Systems, SQL (PostgreSQL and Redshift), ETL



**Basic Qualifications:**



- Bachelor’s Degree or military experience



- At least 3 years of professional coding experience in data management



**Preferred Qualifications:**



- Master's Degree

- 2+ years of experience with the **Hadoop Stack**

- 2+ years of **Distributed Computing** frameworks experience

- 2+ years of experience with Cloud computing

- 2+ years of **NoSQL** implementation experience

- 4+ years of Java development experience

- 4+ years of scripting experience

- 4+ years' experience with **Relational Database Systems** and **SQL**

- 4+ years of **UNIX/Linux** experience



At this time, Capital One will not sponsor a new applicant for employment authorization for this position.




