
Senior Data Engineer


Personal Capital is a next-generation financial advisor. Our award-winning free tools and mobile apps give you a complete picture of your net worth, and we offer comprehensive financial advisory services previously available only to the ultra-wealthy. We use technology to revolutionize the financial industry by making it more affordable, accessible, and honest. Our service starts with our free tools, which enable you to track your entire financial life in one place. Our advisors work one-on-one with clients to help develop sound, long-term investing plans.


You will work with the Data and Analytics team to design, develop, test, and implement highly scalable data pipelines that support data mining, analysis, reporting, operating performance, and the integration of disparate systems. You will build complex classifiers, predictive models, and other machine learning solutions to provide insights and integrate analytic data with our applications. As a member of this team, you'll collaborate with Data Architects, Data Scientists, Business Analysts, and stakeholders to make full use of our rich data set.


We use an agile development methodology and expect our team members to be self-motivated, work well independently, and manage all aspects of the software development life cycle, from coding to deployment. We look for individuals with solid critical-thinking skills and the ability to break down complex problems and build processes that deliver data sets data scientists and business analysts can work with. Excellent written and verbal communication skills are required, as we work in a collaborative, cross-functional environment and interact with the full spectrum of business divisions.

Minimum Skills and Experience

  • A Bachelor of Science degree in Computer Science or equivalent.
  • Five or more years of experience with production-level Java (preferred) or Python programming.
  • Three or more years of experience writing complex SQL statements.
  • Experience building and maintaining a Data Warehouse or Data Lake (preferably in Amazon Redshift).
  • Experience writing API code that interfaces with external systems and is processed through an ETL pipeline.
  • Experience with ETL processes including dimensionalization, star and snowflake schema designs.
  • Experience with statistical analysis and tools such as R and/or Python.
  • Experience with data analytic visualization tools.
  • Experience in machine learning techniques and classifier algorithms.

Desired Additional Skills and Experience

  • Experience with the Amazon ecosystem, including AWS tools such as Kinesis Streams, Kinesis Analytics, Kinesis Firehose, Elastic MapReduce, and Lambda.
  • Experience with Big Data technologies such as Hadoop and Spark.
  • Experience in the financial services industry.
  • Experience manipulating and analyzing large volumes of data.
