Check out the job opportunities below and send your CV/cover letter to jobs[at]segmentino[dot]com
At Segmentino, our work makes a difference in how people purchase online every day. Do you want the opportunity to contribute, grow, and learn every day? We’re looking to hire individuals who are enthusiastic and eager to drive results that matter, whether you are developing code for a public-facing website or ensuring that our latest deployment is bug-free.
Do you enjoy working with amazing people? If you are solution-oriented and thrive in a fast-paced, agile environment, then come work for a company backed by market-leading companies.
We are seeking an experienced and versatile DataOps engineer. The ideal contributor will work with a skilled data engineering team to develop and operate enterprise-grade data pipelines and services that support process optimization using agile analytics, data lakes, and DataOps platforms.
You can be a DataOps Engineer anywhere, but at Segmentino your experience contributes to a mission. Influence the solution!
Segmentino is an Equal Opportunity Employer (Minority/Female/Disability/Vet).
- Strong experience with environment and deployment automation, infrastructure-as-code, and data pipeline specification and development.
- Develop and deliver ongoing releases using tiered data pipelines and continuous integration tools such as Jenkins.
- Ability to embed with development teams to ensure system reliability and performance. Good working knowledge of deploying large scale data warehouse and/or analytics services.
- Develop automated routines for database environments including ETL/ELT procedures, replication, log shipping, performance tuning, unit testing, and backups
- Proficient with distributed version control, including branching and tagging (Git). Create and maintain release and update processes using open-source build tools.
- Experience managing applications in Amazon Web Services or Microsoft Azure, and familiarity with the core compute, networking, storage, security, compliance, serverless, and analytics offerings.
- Specify and manage the provisioning of deployment environments using tools such as CloudFormation, Terraform, Puppet, Chef, and Ansible.
- Manage backend data from multiple systems. Maintain, clean, and correct to ensure accurate reporting.
- Create, manage, and automate ETL processes by building integrations with various APIs.
- Assist other team members with various data analysis projects as needed.
- Excellent documentation habits
*Other similar professional duties may be assigned as needed.
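To give a flavor of the ETL duties listed above, here is a minimal, illustrative pipeline sketch in Python. The extract step is a stub standing in for a real API integration, and SQLite stands in for the warehouse; all names (`orders`, the record fields) are hypothetical:

```python
import sqlite3

# Extract: a stub in place of a real API call (in practice this would
# page through the source system with urllib/requests).
def extract():
    return [
        {"order_id": "A1", "amount": "19.99", "currency": "usd"},
        {"order_id": "A2", "amount": "5.00", "currency": "USD"},
        {"order_id": "A2", "amount": "5.00", "currency": "USD"},  # duplicate
    ]

# Transform: clean, normalize, and deduplicate so reporting stays accurate.
def transform(records):
    seen, cleaned = set(), []
    for r in records:
        if r["order_id"] in seen:
            continue
        seen.add(r["order_id"])
        cleaned.append({"order_id": r["order_id"],
                        "amount": float(r["amount"]),
                        "currency": r["currency"].upper()})
    return cleaned

# Load: idempotent upsert into the target store, so reruns are safe.
def load(rows, conn):
    conn.execute("CREATE TABLE IF NOT EXISTS orders "
                 "(order_id TEXT PRIMARY KEY, amount REAL, currency TEXT)")
    conn.executemany(
        "INSERT OR REPLACE INTO orders VALUES (:order_id, :amount, :currency)",
        rows)
    conn.commit()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    load(transform(extract()), conn)
    print(conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0])
```

A production version would add the scheduling, replication, unit testing, and backup routines the duties above describe; the idempotent load is what makes automated reruns harmless.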
- Must hold at least a bachelor’s degree in Engineering, Computer Science, or a related discipline.
- 5+ years of strong experience spanning data management, data analytics, and DevOps practices.
- Expert-level SQL, NoSQL, and Hadoop development, management, and operations experience
- Proven experience with open-source software including OpenShift, CloudForms, Jenkins, Terraform, Kettle, PostgreSQL, MongoDB, etc. Strong scripting skills, preferably with Python
- Strong organizational and communication skills and keen attention to detail
Employer will accept a suitable combination of education, training, or experience.
Recommending products to online customers – yes, that is the space we work in. We blend a variety of disciplines (such as recommender systems, data mining, NLP, big data, and ML) to turn an intractable problem into a merely hard one. For example: just because someone bought a purple dress does not mean that purple is the right color for them in all seasons, nor that you should recommend purple accessories. And when someone buys a sofa, which rug in Amazon’s vast selection would make the most sense to pair with it? While there are other recommendation problems out there, none of them captures this combination of scale, rapidly changing inventory, and the unique problems of fit, fabric, and finish.
So how do we do it? We are a full-stack team that owns components all the way from dataset generation and high-performance service-oriented architecture to building great UIs that surface recommendations in a pleasing, aesthetically appealing way. We work with engineers, scientists, and product managers. As for techniques, we experiment with collaborative filtering, matrix factorization, and a diverse set of supervised learning algorithms such as neural networks and learning to rank. If you want to be on the cutting edge of personalization in ecommerce retail, reaching customers in a unique and valuable space, this is the team to be on! We have the mandate and ability to effect big changes; we just need the right person to begin.
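To illustrate the matrix-factorization technique mentioned above, here is a toy sketch (not the team's actual system, and the data is made up): learn user and item latent factors from a handful of observed ratings by stochastic gradient descent, then score unseen user-item pairs with a dot product.

```python
import numpy as np

def factorize(ratings, n_users, n_items, k=2, lr=0.05, reg=0.02,
              epochs=300, seed=0):
    """Fit rank-k user/item factors to sparse (user, item, rating) triples."""
    rng = np.random.default_rng(seed)
    U = rng.normal(scale=0.1, size=(n_users, k))  # user latent factors
    V = rng.normal(scale=0.1, size=(n_items, k))  # item latent factors
    for _ in range(epochs):
        for u, i, r in ratings:
            err = r - U[u] @ V[i]                    # prediction error
            u_old = U[u].copy()
            U[u] += lr * (err * V[i] - reg * u_old)  # gradient steps with
            V[i] += lr * (err * u_old - reg * V[i])  # L2 regularization
    return U, V

# Sparse toy observations: (user, item, rating)
ratings = [(0, 0, 5.0), (0, 1, 1.0), (1, 0, 4.0), (2, 1, 2.0), (2, 2, 5.0)]
U, V = factorize(ratings, n_users=3, n_items=3)

# Score an unobserved (user, item) pair; a higher score means "recommend".
print(float(U[1] @ V[2]))
```

The production-scale versions of this idea distribute the factorization (e.g. alternating least squares on Spark) and blend the scores with the supervised rankers mentioned above.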
As a Software Development Engineer, you will work with other dev teams, category owners, engineers, scientists, and program managers. You will own and build infrastructure that accesses terabytes of data to produce and deliver datasets with low latency and high reliability. The goal is to invent new discovery features that make a huge impact on the customer experience. What recommendations data could you use to make search more personalized? How can we use product similarity and sales data to augment and enrich our recommendations? How can we influence customers to drive long-term value for our clients?
We are looking for people who innovate, love solving hard problems, understand both technology and business, and have great judgment. You will build systems that impact millions of customers and create multi-million-dollar revenue opportunities, ship in just a couple of weeks, and instantly measure the impact of what you have developed. And you will be able to go home and show your friends and family how you changed the world!
· Analyze and extract relevant information from large amounts of Amazon’s historical business data to help automate and optimize key features and processes.
· Work closely with stakeholders to optimize various business operations
· Develop recommendation algorithms to power features for our clients
· Research and implement novel statistical approaches
· Infer customer intent to suggest relevant items, categories, or brands to all customers
· Develop APIs for our clients
· Reduce latency to make the shopping experience blazingly fast
· Bachelor’s degree in Computer Science, Mathematics, Statistics, or a related field, or equivalent experience, preferred.
· 2+ years of experience in software development and building large platform systems
· Experience building complex software systems that have been successfully delivered to customers
· Computer Science fundamentals in object-oriented design, data structures, algorithm design, problem solving, and complexity analysis
· Proficiency in at least one modern programming language such as C, C++, or Python
· 5+ years of hands-on experience in predictive modeling and analysis
· Experience with distributed machine learning systems
· Ability to take a project from scoping requirements through launch
· Experience communicating with users, other technical teams, and management to collect requirements and describe software product features and technical designs.
· Ph.D. in Computer Science, Mathematics, Statistics, or a related field, or equivalent experience.
· Experience with Spark, Hadoop, and MapReduce