
Posted: Tuesday, June 27, 2017 1:29 PM

Job Description:
Who You Are:
You're a data hound. You're comfortable poring over billions of dynamic data points and bending them to your will. You've leveraged the Apache stack to build persistent machine learning algorithms, and you've worked inside Apache Hive data warehouses. You know what it takes to set up Apache big data stacks and apply persistent machine learning models using Spark single-handedly.

As our first Machine Learning Engineer, you will own, along with our Data Engineers, the design, implementation, and optimization of our core data and data modeling products. You'll help us apply artificial intelligence to our market-leading real estate agent matching algorithm, build our data warehouse infrastructure, and pioneer innovative business cases for using big data in real estate.

What You'll Do:
- Create predictive models for real estate transactions using a dynamic dataset
- Refine our agent matching algorithm to improve our core product
- Design, model, and build a large-scale data warehouse using ETL and other related technologies
- Work closely with data scientists to optimize and productize machine learning models
- Work closely with engineering to implement and scale your machine learning models
- Build infrastructure and systems for tracking data quality, usage, and consistency
- Design and develop new data products
- Lead innovative ideas for solving challenges in the real estate domain
- Be the thought leader for how we could use data and predictive algorithms to improve user experience and revenue potential from our AI products

You Have:
- B.S. or M.S. in Computer Science or a related field
- 3-5 years of experience developing a data pipeline with custom ETL that accommodates data from multiple sources in multiple formats
- Familiarity with at least one scripting language: R, Python, or Scala
- Experience using SQL to query databases
- Expertise in end-to-end big data architecture, including the ability to design pipelines, design machine learning environments, and work with data scientists to create persistent machine learning models
- Expertise in Hadoop ecosystem products and frameworks such as HDFS, HBase, Sqoop/Flume/Kafka, and Kudu
- Model production in Spark using MLlib (see the sketch after this list)
- Application deployment using AWS, virtual machines, or Docker
- Minimum 2 years of data engineering experience in a Hadoop environment
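As a rough illustration of the Spark MLlib model production mentioned above, the minimal PySpark sketch below fits and persists a simple regression pipeline on made-up transaction data. The column names, model choice, and save path are hypothetical placeholders, not HomeLight's actual pipeline.

# Minimal sketch (hypothetical data and column names) of training and
# persisting a Spark MLlib model of the kind referenced in the requirements.
from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import VectorAssembler
from pyspark.ml.regression import LinearRegression

spark = SparkSession.builder.appName("price-model-sketch").getOrCreate()

# Hypothetical transaction data: two feature columns and a sale-price label.
df = spark.createDataFrame(
    [(1200.0, 3.0, 450000.0), (1800.0, 4.0, 620000.0), (950.0, 2.0, 380000.0)],
    ["sqft", "bedrooms", "sale_price"],
)

# Assemble raw columns into a feature vector, then fit a linear model.
assembler = VectorAssembler(inputCols=["sqft", "bedrooms"], outputCol="features")
lr = LinearRegression(featuresCol="features", labelCol="sale_price")
model = Pipeline(stages=[assembler, lr]).fit(df)

# Persist the fitted pipeline so it can be reloaded for batch or online scoring.
model.write().overwrite().save("/tmp/price_model")

spark.stop()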
Company Description:
What We Do
HomeLight is the first platform to use data and agent reviews to connect home buyers and sellers with the best real estate agents in their area. By analyzing tens of millions of transactions, we can recommend the best and most relevant real estate agents to the consumer. We launched in November 2012 and are growing quickly.
Why Join HomeLight?
You'll be part of a small, tightly-knit team based in the sunny SoMa neighborhood of San Francisco. Free of big-company politics, you can help make products that will affect people in the real world. As an early team member, you'll get to help shape product direction, own outcomes, and learn from phenomenally talented colleagues.

Source: https://www.tiptopjob.com/jobs/68933835_job.asp?source=backpage


• Location: San Francisco
