
DevOps Engineer - Big Data

Capital One


Location: Vienna, VA
Date: 04/27/2017
Job Code: capitalone2-R22665
Categories: Engineering

Job Details

Company: Capital One

Job Title: DevOps Engineer - Big Data

JobID: capitalone2-R22665

Location: Vienna, VA, 22180, USA

Description: Towers Crescent (12066), United States of America, Vienna, Virginia



At Capital One, we’re building a leading information-based technology company. Still founder-led by Chairman and Chief Executive Officer Richard Fairbank, Capital One is on a mission to help our customers succeed by bringing ingenuity, simplicity, and humanity to banking. We measure our efforts by the success our customers enjoy and the advocacy they exhibit. We are succeeding because they are succeeding.



Guided by our shared values, we thrive in an environment where collaboration and openness are valued. We believe that innovation is powered by perspective and that teamwork and respect for each other lead to superior results. We elevate each other and obsess about doing the right thing. Our associates serve with humility and a deep respect for their responsibility in helping our customers achieve their goals and realize their dreams. Together, we are on a quest to change banking for good.



**DevOps Engineer - Big Data**



**Who We Are:**



Capital One is a technology company, a research laboratory, and a nationally recognized brand with over 65 million customers. We offer a broad spectrum of financial products and services to consumers, small businesses, and commercial clients - and data is at the center of everything we do. Capital One was named one of Fortune’s 2017 100 Best Companies to Work For, marking our 11th consecutive year on the list! Come learn more about the great opportunities we have to offer!



**The Role:**



We are looking for driven individuals to join our team of passionate data engineers in creating Capital One’s next generation of data products and capabilities.



- You will build data pipeline frameworks to automate high-volume and real-time data delivery for our Hadoop and streaming data hub



- You will build data APIs and data delivery services that support critical operational and analytical applications for our internal business operations, customers, and partners



- You will transform complex analytical models into scalable, production-ready solutions



- You will continuously integrate and ship code into our on-premises and cloud production environments



- You will develop applications from the ground up using a modern technology stack such as Scala, Spark, Postgres, AngularJS, and NoSQL (a rough sketch of this kind of work follows this list)



- You will work directly with Product Owners and customers to deliver data products in a collaborative and agile environment
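
The stack above calls out Scala and Spark, so here is a minimal, hypothetical sketch of the kind of pipeline step this role describes - not Capital One's actual code. It shows a Spark batch job in Scala that ingests raw JSON events, drops malformed records, and lands the result partitioned by date. The paths, column names, and the `EventPipeline` object name are all illustrative assumptions.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

// Minimal sketch of one pipeline step: ingest raw JSON events, drop
// records missing a key, derive a date partition, and land the result
// as Parquet. All paths, columns, and names are hypothetical.
object EventPipeline {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-pipeline-sketch")
      .getOrCreate()

    val raw = spark.read.json("hdfs:///data/raw/events")   // assumed source path

    val cleaned = raw
      .filter(col("event_id").isNotNull)                   // discard malformed records
      .withColumn("event_date", to_date(col("event_ts")))  // derive the partition column

    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("hdfs:///data/curated/events")              // assumed sink path

    spark.stop()
  }
}
```

A real-time variant of the same step would swap `read`/`write` for Spark Structured Streaming's `readStream`/`writeStream`; in production, either form would typically live inside a reusable pipeline framework rather than a standalone job.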



**Responsibilities:**



- Develop sustainable, data-driven solutions using current and next-gen data technologies to meet the needs of our organization and business customers



- Rapidly grasp new technologies as needed to progress varied initiatives



- Break down data issues and resolve them



- Build robust systems with an eye on the long-term maintenance and support of the application



- Leverage reusable code modules to solve problems across the team and organization



- Utilize a working knowledge of multiple development languages



- Drive cross-team design and development through technical leadership and mentoring



- Understand complex multi-tier, multi-platform systems



**What we have:**



- A startup mindset with the backing of a top 10 bank



- Monthly Innovation Days dedicated to test-driving cutting-edge technologies



- Flexible work schedules



- Convenient office locations



- Generous salary and merit-based pay incentives



- Your choice of equipment (MacBook/PC, iPhone/Android Device)



#ilovedata



**Basic Qualifications:**



- Bachelor’s Degree or military experience



- At least 3 years of coding experience in data management, data warehousing, or unstructured data environments



- At least 3 years’ experience with leading big data technologies like Cassandra, Accumulo, HBase, Spark, Hadoop, HDFS, Avro, MongoDB, ZooKeeper



**Preferred Qualifications:**



- Master's Degree



- At least 3 years’ experience with Agile engineering practices



- At least 3 years’ in-depth experience with the Hadoop stack (MapReduce, Pig, Hive, HBase)



- At least 3 years’ experience with NoSQL implementations (MongoDB, Cassandra, etc.)



- At least 3 years’ experience developing Java-based software solutions



- At least 3 years’ experience in at least one scripting language (Python, Perl, JavaScript, Shell)



- At least 3 years’ experience developing software solutions to solve complex business problems



- At least 3 years’ experience with Relational Database Systems and SQL



- At least 3 years’ experience designing, developing, and implementing ETL



- At least 3 years’ experience with UNIX/Linux including basic commands and shell scripting



**At this time, Capital One will not sponsor a new applicant for employment authorization for this position.**




