7 Data Engineer Job Listings in Venezuela

Data Engineer

Launchpad Technologies Inc.

Posted 14 days ago


Job Description


Senior Data Engineer – Scalable Pipelines for AI Agent Infrastructure

We’re looking for a Senior Data Engineer to design, build, and maintain the data foundation behind our next-generation AI agent platform. You’ll work closely with AI/ML teams to power training, inference, and continuous learning through highly scalable data pipelines and cloud-native architectures.

If you’re passionate about data infrastructure, performance optimization, and driving data quality at scale, we want to hear from you.

Start date: ASAP

Contract type: Contractor - Indefinite

Work hours: Monday to Friday, 7:30 am to 4:30 pm PST - 100% Remote

What You’ll Be Doing

  • Build scalable ETL/ELT pipelines to support AI agent training and inference.
  • Integrate data from diverse sources (APIs, databases, real-time streams).
  • Design high-performance data models using PostgreSQL and Snowflake.
  • Optimize SQL queries for large datasets and data warehouse workloads.
  • Implement data validation, cleansing, and monitoring workflows.
  • Automate data engineering tasks and contribute to CI/CD pipelines.
  • Collaborate with AI/ML engineers and product teams to align data with business needs.
  • Document processes, pipelines, and infrastructure decisions clearly.
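
The pipeline responsibilities above reduce to an extract-validate-load loop. As a hedged sketch of that pattern (the standard library's sqlite3 stands in for a warehouse such as PostgreSQL or Snowflake; the record fields and validation rules are hypothetical):

```python
import sqlite3

def run_etl(raw_records, conn):
    """Minimal ETL step: take raw records, validate and clean them, load them in one batch."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS events (id INTEGER PRIMARY KEY, source TEXT, value REAL)"
    )
    # Transform: normalize fields and drop records that fail basic validation
    # (these rules are illustrative, not from the posting).
    clean = [
        (r["id"], r["source"].strip().lower(), float(r["value"]))
        for r in raw_records
        if r.get("id") is not None and r.get("value") is not None
    ]
    # Load: one transaction per batch, so a failed load leaves no partial state.
    with conn:
        conn.executemany("INSERT OR REPLACE INTO events VALUES (?, ?, ?)", clean)
    return len(clean)

# Example batch, as it might arrive from an API or stream.
raw = [
    {"id": 1, "source": " Web ", "value": "3.5"},
    {"id": 2, "source": "api", "value": None},   # rejected by validation
    {"id": 3, "source": "Stream", "value": 7},
]
conn = sqlite3.connect(":memory:")
loaded = run_etl(raw, conn)
print(loaded)  # 2
```

In a production setting this function would be one task in an Airflow DAG, with the validation step emitting metrics for the monitoring workflows the posting mentions.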

What You Need to Succeed

Must-haves

  • 5+ years of experience in Data Engineering or Data Infrastructure roles.
  • Strong command of:
    • ETL frameworks: Airflow, DBT, Spark
    • Databases: PostgreSQL, Snowflake
    • Cloud & DevOps: AWS (S3, EC2), Linux, Docker
    • CI/CD: GitHub, Jenkins or similar
    • Programming: Python (data-focused scripting)
  • Deep knowledge of SQL performance tuning, data warehousing, and data quality practices.
  • Experience with data security and managing large-scale database environments.
  • Comfortable working in cross-functional teams and agile environments.

Nice-to-haves

  • Experience with AWS RDS, MWAA, Lambda.
  • Familiarity with vector databases and retrieval-augmented generation (RAG) pipelines.
  • Exposure to prompt engineering, LLM workflows, or AI/ML data operations.
  • Data Engineering certification (GCP, AWS, or equivalent).

Our Recruitment Process

Here’s what to expect from our candidate-friendly interview process:

  • Initial Interview – 60 minutes with our Talent Acquisition Specialist
  • Culture Fit – 30 minutes with our Team Engagement Manager
  • Technical Assessment - Python Coding + Multiple Choice questions
  • Final Stage – 60 minutes with the Hiring Manager (Technical Interview)

Why Join Launchpad?

We believe that great work starts with great people. At Launchpad, we offer:

  • Fully remote work with hardware provided
  • Global team experience with clients in (regions)
  • Competitive USD compensation
  • Training and learning stipends
  • Paid Time Off (vacation, personal, study)
  • A culture that values autonomy, purpose, and human connection

Join us and help shape the future of intelligent systems—one data pipeline at a time.

Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • IT Services and IT Consulting


Principal Data Engineer

Toptal

Posted 2 days ago


Job Description

About Toptal

Toptal is a global network of top talent in business, design, and technology that enables companies to scale their teams, on-demand. With $200+ million in annual revenue and team members based around the globe, Toptal is the world’s largest fully remote workforce.

We take the best elements of virtual teams and combine them with a support structure that encourages innovation, social interaction, and fun. We see no borders, move at a fast pace, and are never afraid to break the mold.

Job Summary:

At Toptal, we measure everything and always rely on data to guide all of our initiatives, including both our long-term strategy and our day-to-day operations. As a Principal Data Engineer, your primary goal is to be one step ahead. You will support Business Analysts, Data Analysts, and Data Scientists by providing infrastructure and tools they can use to deliver end-to-end solutions and business insights. This is more than building and maintaining ETL pipelines: we need innovation, creativity, and solutions that meaningfully improve our velocity and business results. We, in turn, will give you autonomy and freedom to turn your ideas into reality.

This is a remote position. We do not offer visa sponsorship or assistance. Resumes and communication must be submitted in English.

Responsibilities:

The following information is intended to describe the general nature and level of work being performed. It is not intended to be an exhaustive list of all duties, responsibilities, or required skills.

  • Lead and contribute to architecting and building a modern data stack that is scalable, maintainable, and highly performant. This includes building frameworks, data pipelines, and other data infrastructure using a variety of raw data sources.
  • Collaborate with data source providers both internal and external to set rules, processes, and checks that ensure data availability and integrity.
  • Monitor and maintain the data pipelines and ETL processes to proactively remediate issues and preserve data availability.
  • Support the migration from legacy orchestration systems and ETL, including migrating pipelines reading from internal databases and APIs, remapping data sources, and translating transformation logic.
  • Ensure proper governance practices and effective documentation are implemented throughout the ETL migration process.
  • Communicate with team members and convey results efficiently and clearly.
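
The monitoring responsibility above usually starts with a freshness check over each source's last successful load. A minimal sketch, where the source names and the six-hour threshold are invented for illustration:

```python
from datetime import datetime, timedelta, timezone

def find_stale_sources(last_loaded, max_age=timedelta(hours=6), now=None):
    """Return names of sources whose most recent successful load is older than max_age.

    last_loaded maps source name -> datetime of its last successful load.
    Run on a schedule and alert on any non-empty result: that is the core
    of proactively preserving data availability.
    """
    now = now or datetime.now(timezone.utc)
    return sorted(name for name, ts in last_loaded.items() if now - ts > max_age)

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
loads = {
    "billing_db": now - timedelta(hours=2),        # fresh
    "crm_api": now - timedelta(hours=9),           # stale
    "events_stream": now - timedelta(minutes=30),  # fresh
}
print(find_stale_sources(loads, now=now))  # ['crm_api']
```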

In the first week, expect to:

  • Meet mentors that will help you during your onboarding month.
  • Meet your team, managers, and other key stakeholders like Business Analysts.
  • Start participating in company-wide training sessions.
  • Set up your local environment and become familiar with our tech stack.

In the first month, expect to:

  • Have a comfortable understanding of our data stack and systems.
  • Have a clear understanding of the team’s data strategy and direction for the infrastructure.
  • Develop an understanding of Toptal’s business and offerings.
  • Develop an understanding of Toptal’s different processes and team structure, and identify and meet key technical and business stakeholders.
  • Start contributing to legacy cleanup and migration efforts.

In the first three months, expect to:

  • Have strong knowledge of Toptal’s business.
  • Provide daily support to users and contribute actively to development, migration, and maintenance work.
  • Take part in the ongoing monitoring and maintenance of data pipelines and processes.
  • Be ready to propose and implement improvements to Toptal’s processes and codebase.
  • Deliver value in a regular cadence.
  • Get comfortable in your daily work within your team.

In the first six months, expect to:

  • Own technical initiatives on our team.
  • Drive improvements to the codebase and processes.
  • Contribute to planning and executing long-term initiatives inside your team.
  • Be able to not only solve complex problems, but also consider multiple solutions, weigh them and decide on the best course of action.
  • Exercising discretion and independent judgment, proactively identify technical debt and product areas that need attention, and suggest improvements to our technology stack.

In the first year, expect to:

  • Have a detailed understanding of Toptal’s business, collaboration rituals, processes, performance, and future work.
  • Determine what your career path looks like at Toptal.
  • Mentor Toptal’s new team members.

Qualifications and Job Requirements:

  • Bachelor’s degree is required.
  • 10+ years of experience working with data infrastructure: architecture, cloud data warehouses, data modeling, ETL tools and processes, and data ingestion techniques.
  • Extensive experience working with Python, Pandas, and SQL.
  • Experience with Google Cloud Platform (including Google Cloud Storage and BigQuery), object-oriented programming, CI/CD, and ETL technologies such as Airflow, Luigi, Dagster, and CDC is preferred.
  • Outstanding English written and verbal communication skills.
  • Be excited about collaborating daily with your team and other groups while working via a distributed model.
  • Be eager to help your teammates, share your knowledge with them, and learn from them.
  • Be open to receiving constructive feedback.
  • Ability to work in a fast-paced, rapidly growing company and handle a wide variety of challenges, deadlines, and a diverse array of contacts.
  • You must be a world-class individual contributor to thrive at Toptal. You will not be here just to tell other people what to do.


Seniority level
  • Mid-Senior level
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • Technology, Information and Internet


Senior Data Engineer (Remote - Latam)

Premium listing
Remote - AstroSirens LLC

Posted 6 days ago


Job Description

Permanent

Your role:
Lead the design of scalable data storage and pipeline solutions (AWS/GCP/Azure) for global clients. Drive ETL processes and data governance, and mentor junior engineers.

What you bring:
- 5-6+ years of data engineering experience.
- Experience with SQL, Python/Scala, and Spark/Airflow/Kafka.
- Cloud proficiency (Redshift, BigQuery, S3, Databricks).
- Fluent English (required).
- Self-sufficient in remote environments.

Why join?
- 100% remote (Latam) + flexible hours.
- Competitive salary + growth plan.
- Projects with global impact.

To apply: send your resume to

Native or bilingual command of English is essential.

An initial interview will be held to verify your English level and subject-matter expertise.

Please send your CV indicating the vacancy and, if you are not Mexican, your country; we are open to all of Latin America.

Company Details

AstroSirens is a forward-thinking technology consulting and recruitment firm dedicated to delivering high-impact solutions to companies worldwide. Our remote-first culture fosters innovation, collaboration, and continuous learning. We are currently looking for a senior data engineer to join our team of expert engineers.

Big Data Engineer - Remote Work | REF#112992

Caracas, Distrito Capital - BairesDev

Posted 14 days ago


Job Description

At BairesDev, we've been leading the way in technology projects for over 15 years. We deliver cutting-edge solutions to giants like Google and the most innovative startups in Silicon Valley.

Our diverse 4,000+ team, composed of the world's Top 1% of tech talent, works remotely on roles that drive significant impact worldwide.

When you apply for this position, you're taking the first step in a process that goes beyond the ordinary. We aim to align your passions and skills with our vacancies, setting you on a path to exceptional career development and success.

Big Data Engineer at BairesDev

Big Data Engineers will face numerous business-impacting challenges, so they must be ready to use state-of-the-art technologies and be familiar with different IT domains such as Machine Learning, Data Analysis, Mobile, Web, IoT, etc. They are passionate, active members of our community who enjoy sharing knowledge, challenging others and being challenged, and are genuinely committed to improving themselves and those around them.

What You’ll Do
  • Work alongside Developers, Tech Leads, and Architects to build solutions that transform users’ experience.
  • Impact the core of business by improving existing architecture or creating new ones.
  • Create scalable and high-availability solutions, and contribute to the key differential of each client.
Here Is What We Are Looking For
  • 6+ years of experience working as a Developer (Ruby, Python, Java, JS preferred).
  • 5+ years of experience in Big Data (Comfortable with enterprises' Big Data topics such as Governance, Metadata Management, Data Lineage, Impact Analysis, and Policy Enforcement).
  • Proficient in analysis, troubleshooting, and problem-solving.
  • Experience building data pipelines to handle large volumes of data (either leveraging well-known tools or custom-made ones).
  • Advanced English level.
Desirable
  • Building Data Lakes with Lambda/Kappa/Delta architecture.
  • DataOps, particularly creating and managing batch and real-time data ingestion and processing processes.
  • Hands-on experience with managing data loads and data quality.
  • Modernizing enterprise data warehouses and business intelligence environments with open-source tools.
  • Deploying Big Data solutions to the cloud (Cloudera, AWS, GCP, or Azure).
  • Performing real-time data visualization and time series analysis using open-source and commercial solutions.
How We Make Your Work (and Your Life) Easier
  • 100% remote work (from anywhere).
  • Excellent compensation in USD or your local currency if preferred.
  • Hardware and software setup for you to work from home.
  • Flexible hours: create your own schedule.
  • Paid parental leaves, vacations, and national holidays.
  • Innovative and multicultural work environment: collaborate and learn from the global Top 1% of talent.
  • Supportive environment with mentorship, promotions, skill development, and diverse growth opportunities.

Join a global team where your unique talents can truly thrive!


Senior Database Engineer - Data Synchronization Specialist

M365Connect

Posted 11 days ago


Job Description

Manatal Database Sync Project


Company Overview

We are seeking an experienced Database Engineer to architect and maintain our Manatal recruitment database synchronization infrastructure. This role is critical for maintaining data authority, overcoming API throttling limitations, and providing flexible data access across our organization.

Position Summary

The Senior Database Engineer will be responsible for designing, implementing, and maintaining a robust PostgreSQL database that syncs with Manatal's recruitment platform. You'll work closely with our automation engineers to ensure seamless data flow and create comprehensive documentation using professional ERD tools.

Key Responsibilities

  • Design and implement PostgreSQL database schemas optimized for recruitment data synchronization
  • Collaborate with automation engineers to define data flow requirements and sync strategies
  • Configure and manage Docker containers for PostgreSQL, pgAdmin, and other database tools
  • Create and maintain comprehensive ERD documentation using professional tools (Lucidchart, dbdiagram.io, etc.)
  • Monitor database performance and troubleshoot data inconsistencies
  • Implement security best practices for database access
  • Optimize database performance through indexing, query optimization, and maintenance procedures
  • Design data models that support efficient synchronization patterns
  • Maintain detailed technical documentation and data dictionaries
  • Provide training and support to team members on database usage
  • Evaluate and recommend database tools and technologies
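
The synchronization work described above typically centers on idempotent upserts, so a batch can be replayed safely after an API throttle or a retry. A sketch of the pattern (SQLite 3.24+ stands in here for PostgreSQL, whose INSERT ... ON CONFLICT syntax is the same; the candidates table and its fields are hypothetical, not Manatal's actual schema):

```python
import sqlite3

def sync_candidates(conn, records):
    """Upsert a batch of records pulled from an external API into a local table."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS candidates ("
        " id INTEGER PRIMARY KEY, full_name TEXT, stage TEXT, updated_at TEXT)"
    )
    with conn:  # one transaction per batch keeps readers consistent mid-sync
        conn.executemany(
            "INSERT INTO candidates (id, full_name, stage, updated_at)"
            " VALUES (:id, :full_name, :stage, :updated_at)"
            " ON CONFLICT(id) DO UPDATE SET"
            "  full_name = excluded.full_name,"
            "  stage = excluded.stage,"
            "  updated_at = excluded.updated_at",
            records,
        )

conn = sqlite3.connect(":memory:")
# Replaying the same id twice updates in place instead of duplicating the row.
sync_candidates(conn, [{"id": 1, "full_name": "Ana P.", "stage": "screening", "updated_at": "2024-01-01"}])
sync_candidates(conn, [{"id": 1, "full_name": "Ana P.", "stage": "interview", "updated_at": "2024-01-02"}])
print(conn.execute("SELECT stage FROM candidates WHERE id = 1").fetchone()[0])  # interview
```

Because the upsert is idempotent, the sync scheduler can simply re-run a window of records after hitting an API rate limit, with no dedup pass needed.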


Required Skills & Experience

  • Database Expertise:
    • 5+ years of PostgreSQL administration and development
    • Expert knowledge of SQL, including complex queries, stored procedures, and triggers
    • Experience with JSONB data types and polymorphic relationships
    • Database performance tuning and optimization skills
    • Experience with database replication and high availability setups
  • Modern Database Platforms:
    • Experience with Supabase or similar PostgreSQL-based platforms
    • Understanding of real-time subscriptions and row-level security
    • Knowledge of database-as-a-service platforms
    • Experience with database migrations and schema versioning
  • Data Architecture & Design:
    • Strong understanding of data modeling and normalization
    • Experience designing schemas for high-volume synchronization
    • Knowledge of data warehousing concepts
    • Understanding of API data structures and transformation requirements
  • DevOps & Infrastructure:
    • Strong Docker and Docker Compose experience
    • Linux server administration (Ubuntu preferred)
    • Experience with database backup and recovery strategies
    • Version control with Git/GitHub
  • Documentation & Visualization:
    • Proficiency with professional ERD tools (Lucidchart, dbdiagram.io, ERwin, etc.)
    • Experience creating technical documentation and data dictionaries
    • Ability to communicate complex database designs to non-technical stakeholders
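
The JSONB requirement above comes down to querying semi-structured API payloads in place, without flattening them first. A tiny sketch using SQLite's built-in JSON functions as a stand-in for PostgreSQL's jsonb operators (the table and payload are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE profiles (id INTEGER PRIMARY KEY, payload TEXT)")
conn.execute(
    "INSERT INTO profiles VALUES (1, '{\"name\": \"Ana\", \"skills\": [\"sql\", \"docker\"]}')"
)
# Extract a nested field, as PostgreSQL would with payload->>'name' on a jsonb column.
name = conn.execute("SELECT json_extract(payload, '$.name') FROM profiles").fetchone()[0]
print(name)  # Ana
```

In PostgreSQL the equivalent column would be typed jsonb and could carry a GIN index, which is what makes this pattern viable at sync volume.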


Preferred Qualifications

  • Experience with recruitment/ATS systems (Manatal experience is a plus)
  • Familiarity with multiple ERD and database design tools
  • Knowledge of Redis for caching strategies
  • Experience with monitoring tools (Prometheus, Grafana)
  • Understanding of event-driven architectures
  • Experience with data synchronization patterns and conflict resolution


What We Offer

  • Competitive salary based on experience
  • Remote work flexibility
  • Opportunity to architect critical infrastructure
  • Professional development opportunities
  • Direct impact on company data strategy

Seniority level
  • Not Applicable
Employment type
  • Full-time
Job function
  • Information Technology
Industries
  • IT Services and IT Consulting


Python and Kubernetes Software Engineer - Data, AI/ML & Analytics

Caracas, Distrito Capital - Canonical

Posted 14 days ago


Job Description

Canonical is a leading provider of open source software and operating systems to the global enterprise and technology markets. Our platform, Ubuntu, is very widely used in breakthrough enterprise initiatives such as public cloud, data science, AI, engineering innovation and IoT. Our customers include the world's leading public cloud and silicon providers, and industry leaders in many sectors. The company is a pioneer of global distributed collaboration, with 1000+ colleagues in 70+ countries and very few roles based in offices. Teams meet two to four times yearly in person, in interesting locations around the world, to align on strategy and execution.

The company is founder led, profitable and growing. We are hiring Python and Kubernetes Specialist Engineers focused on Data, AI/ML and Analytics Solutions to join our teams building open source solutions for public cloud and private infrastructure.

As a software engineer on the team, you'll collaborate on an end-to-end data analytics and mlops solution composed of popular, open-source, machine learning tools, such as Kubeflow, MLFlow, DVC, and Feast. You may also work on workflow, ETL, data governance and visualization tools like Apache SuperSet, dbt, and Temporal, or data warehouse solutions such as Apache Trino, or ClickHouse. Your team will own a solution from the analytics and machine learning space, and integrate with the solutions from other teams to build the world's best end-to-end data platform. These solutions may be run on servers or on the cloud, on machines or on Kubernetes, on developer desktops, or as web services.

We serve the needs of individuals and community members as much as the needs of our Global 2000 and Fortune 500 customers; we make our primary work available free of charge and our Pro subscriptions are also available to individuals for personal use at no cost. Our goal is to enable more people to enjoy the benefits of open source, regardless of their circumstances.

Location: This initiative spans many teams that are home-based and in multiple time zones. We believe in distributed collaboration but we also try to ensure that colleagues have company during their work hours! Successful candidates will join a team where most members and your manager are broadly in the same time zone so that you have the benefits of constant collaboration and discussion.

What your day will look like:

  • Develop your understanding of the entire Linux stack, from kernel, networking, and storage, to the application layer
  • Design, build and maintain solutions that will be deployed on public and private clouds and local workstations
  • Master distributed systems concepts such as observability, identity, tracing
  • Work with both Kubernetes and machine-oriented open source applications
  • Collaborate proactively with a distributed team of engineers, designers and product managers
  • Debug issues and interact in public with upstream and Ubuntu communities
  • Generate and discuss ideas, and collaborate on finding good solutions
What we are looking for in you:
  • Professional or academic software delivery using Python
  • Exceptional academic track record from both high school and university
  • Undergraduate degree in a technical subject or a compelling narrative about your alternative chosen path
  • Confidence to respectfully speak up, exchange feedback, and share ideas without hesitation
  • Track record of going above-and-beyond expectations to achieve outstanding results
  • Passion for technology evidenced by personal projects and initiatives
  • The work ethic and confidence to shine alongside motivated colleagues
  • Professional written and spoken English with excellent presentation skills
  • Experience with Linux (Debian or Ubuntu preferred)
  • Excellent interpersonal skills, curiosity, flexibility, and accountability
  • Appreciative of diversity, polite and effective in a multi-cultural, multi-national organisation
  • Thoughtfulness and self-motivation
  • Result-oriented, with a personal drive to meet commitments
  • Ability to travel twice a year, for company events up to two weeks long
Additional Skills That Would Be Nice To Have:
  • Hands-on experience with machine learning libraries, or tools.
  • Proven track record of building highly automated machine learning solutions for the cloud.
  • Experience with container technologies (Docker, LXD, Kubernetes, etc.)
  • Experience with public clouds (AWS, Azure, Google Cloud)
  • Working knowledge of cloud computing
  • Passionate about software quality and testing
  • Experience working on an open source project
What we offer colleagues:

We consider geographical location, experience, and performance in shaping compensation worldwide. We revisit compensation annually (and more often for graduates and associates) to ensure we recognise outstanding performance. In addition to base pay, we offer a performance-driven annual bonus or commission. We provide all team members with additional benefits, which reflect our values and ideals. We balance our programs to meet local needs and ensure fairness globally.

  • Distributed work environment with twice-yearly team sprints in person
  • Personal learning and development budget of USD 2,000 per year
  • Annual compensation review
  • Recognition rewards
  • Annual holiday leave
  • Maternity and paternity leave
  • Employee Assistance Programme
  • Opportunity to travel to new locations to meet colleagues
  • Priority Pass, and travel upgrades for long haul company events
About Canonical:

Canonical is a pioneering tech firm at the forefront of the global move to open source. As the company that publishes Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud, we are changing the world of software. We recruit on a global basis and set a very high standard for people joining the company. We expect excellence - in order to succeed, we need to be the best at what we do. Most colleagues at Canonical have worked from home since its inception in 2004. Working here is a step into the future, and will challenge you to think differently, work smarter, learn new skills, and raise your game.

Canonical is an equal opportunity employer:

We are proud to foster a workplace free from discrimination. Diversity of experience, perspectives, and background create a better work environment and better products. Whatever your identity, we will give your application fair consideration.


Python and Kubernetes Software Engineer - Data, Workflows, AI/ML & Analytics

Caracas, Distrito Capital - Canonical

Posted 11 days ago


Job Description

Canonical is a leading provider of open source software and operating systems to the global enterprise and technology markets. Our platform, Ubuntu, is very widely used in breakthrough enterprise initiatives such as public cloud, data science, AI, engineering innovation and IoT. Our customers include the world's leading public cloud and silicon providers, and industry leaders in many sectors. The company is a pioneer of global distributed collaboration, with 1000+ colleagues in 70+ countries and very few roles based in offices. Teams meet two to four times yearly in person, in interesting locations around the world, to align on strategy and execution.

The company is founder led, profitable and growing. We are hiring Python and Kubernetes Specialist Engineers focused on Data, Workflows, AI/ML and Analytics Solutions to join our teams building open source solutions for public cloud and private infrastructure.

As a software engineer on the team, you'll collaborate on end-to-end data analytics and MLOps solutions composed of popular, open-source, machine learning tools, such as Kubeflow, MLFlow, DVC, and Feast. You may also work on ETL, data governance and visualization tools like Apache SuperSet, dbt, workflow orchestration tools such as Airflow and Temporal, or data warehouse solutions such as Apache Trino, or ClickHouse. These solutions may be run on servers or on the cloud, on machines or on Kubernetes, on developer desktops, or as web services.

We serve the needs of individuals and community members as much as the needs of our Global 2000 and Fortune 500 customers; we make our primary work available free of charge and our Pro subscriptions are also available to individuals for personal use at no cost. Our goal is to enable more people to enjoy the benefits of open source, regardless of their circumstances.

Location: This initiative spans many teams that are home-based and in multiple time zones. We believe in distributed collaboration but we also try to ensure that colleagues have company during their work hours! Successful candidates will join a team where most members and your manager are broadly in the same time zone so that you have the benefits of constant collaboration and discussion.

What your day will look like

  • Develop your understanding of the entire Linux stack, from kernel, networking, and storage, to the application layer
  • Design, build and maintain solutions that will be deployed on public and private clouds and local workstations
  • Master distributed systems concepts such as observability, identity, tracing
  • Work with both Kubernetes and machine-oriented open source applications
  • Collaborate proactively with a distributed team of engineers, designers and product managers
  • Debug issues and interact in public with upstream and Ubuntu communities
  • Generate and discuss ideas, and collaborate on finding good solutions

What we are looking for in you

  • Professional or academic software delivery using Python
  • Exceptional academic track record from both high school and university
  • Undergraduate degree in a technical subject or a compelling narrative about your alternative chosen path
  • Confidence to respectfully speak up, exchange feedback, and share ideas without hesitation
  • Track record of going above-and-beyond expectations to achieve outstanding results
  • Passion for technology evidenced by personal projects and initiatives
  • The work ethic and confidence to shine alongside motivated colleagues
  • Professional written and spoken English with excellent presentation skills
  • Experience with Linux (Debian or Ubuntu preferred)
  • Excellent interpersonal skills, curiosity, flexibility, and accountability
  • Appreciative of diversity, polite and effective in a multi-cultural, multi-national organisation
  • Thoughtfulness and self-motivation
  • Result-oriented, with a personal drive to meet commitments
  • Ability to travel twice a year, for company events up to two weeks long

Additional skills that would be nice to have

The following skills may be helpful to you in the role, but we don't expect everyone to bring all of them.

  • Proven track record of building highly automated machine learning solutions, data pipelines, or orchestrating workflows for the cloud.
  • Hands-on experience with machine learning libraries or tools.
  • Experience with container technologies (Docker, LXD, Kubernetes, etc.)
  • Experience with public clouds (AWS, Azure, Google Cloud)
  • Working knowledge of cloud computing
  • Passionate about software quality and testing
  • Experience working on an open source project

What we offer colleagues

We consider geographical location, experience, and performance in shaping compensation worldwide. We revisit compensation annually (and more often for graduates and associates) to ensure we recognise outstanding performance. In addition to base pay, we offer a performance-driven annual bonus or commission. We provide all team members with additional benefits, which reflect our values and ideals. We balance our programs to meet local needs and ensure fairness globally.

  • Distributed work environment with twice-yearly team sprints in person
  • Personal learning and development budget of USD 2,000 per year
  • Annual compensation review
  • Recognition rewards
  • Annual holiday leave
  • Maternity and paternity leave
  • Employee Assistance Programme
  • Opportunity to travel to new locations to meet colleagues
  • Priority Pass and travel upgrades for long-haul company events

About Canonical

Canonical is a pioneering tech firm at the forefront of the global move to open source. As the company that publishes Ubuntu, one of the most important open source projects and the platform for AI, IoT and the cloud, we are changing the world of software. We recruit on a global basis and set a very high standard for people joining the company. We expect excellence - in order to succeed, we need to be the best at what we do. Most colleagues at Canonical have worked from home since its inception in 2004. Working here is a step into the future, and will challenge you to think differently, work smarter, learn new skills, and raise your game.

Canonical is an equal opportunity employer

We are proud to foster a workplace free from discrimination. Diversity of experience, perspectives, and background create a better work environment and better products. Whatever your identity, we will give your application fair consideration.
