Redshift jobs
Looking for a backend developer in Python who has good knowledge of Python, PyCharm, Django application development and deployment, and AWS services such as S3, Redshift, DynamoDB, SQL, MySQL, etc.
Project Title: Expertise in Python (PyCharm), AWS S3, Redshift, DynamoDB, Lambda, Git, Jenkins, and Aurora. I am looking for a skilled Python developer with experience in AWS services and Git version control. The ideal candidate should be familiar with the following: AWS Services: - AWS S3 - Redshift - DynamoDB - Lambda - Aurora. Git and Jenkins: - Proficient in using Git for version control - Knowledge of Jenkins for continuous integration and deployment. The candidate should also be good with unit testing in Python. If you have the required skills and experience in Python, AWS services, and Git, please apply for this project.
Time: 9 am to 1 pm. Total duration: 20 hours across 5 days of training. ToC/syllabus attached.
I am looking for an experienced AWS data engineer who can assist me with Serverless Redshift and PySpark. I do not need help with setting up a system of automation, but I may require assistance with running analytics on the data. The ideal candidate should have experience with the following: - Serverless Redshift - PySpark Skills and experience required for this project: - Strong knowledge of AWS services, particularly Serverless Redshift and PySpark - Experience in data engineering and analytics - Familiarity with S3, Lambda, Boto3, and Step Functions would be a plus - Ability to work independently and efficiently - Excellent problem-solving and communication skills Working time = 8:30 PM EST to 10:30 PM EST (6 AM IST to 8 AM IST) Duration = 3 to 6 months
We need an AWS Data Engineer (Redshift and Talend are a must) to work 2 hours a day remotely. We will pay 25-28k per month
Task 1: Create Infrastructure as Code (CloudFormation via Python) to spin up an S3 bucket, 2 EC2 instances, and 1 RDS instance. Also create an auto-scaling capability that triggers when EC2 CPU usage goes above 80%. Please demo the code and deploy it in AWS to show it works. Task 2: Create a project in Git with CI, upload the sample code from Task 1, fork the code, add the AWS Redshift service to it, run the pipeline, and merge it back to the main branch. Task 3: Describe, with an architecture diagram, how to migrate a Weather Forecast application installed on EC2 to AWS Fargate, as a detailed step-by-step guide.
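Task 1 is often approached by generating the CloudFormation template from Python. A minimal sketch, with placeholder resource names, AMI ID, and credentials (a real stack would also wire the alarm's `AlarmActions` to an Auto Scaling policy):

```python
import json

def build_template():
    """Sketch of a CloudFormation template built as a Python dict.
    Resource names, the AMI ID, and DB credentials are all placeholders."""
    ec2 = {"Type": "AWS::EC2::Instance",
           "Properties": {"InstanceType": "t3.micro", "ImageId": "ami-00000000"}}
    return {
        "AWSTemplateFormatVersion": "2010-09-09",
        "Resources": {
            "DataBucket": {"Type": "AWS::S3::Bucket"},
            "WebServer1": ec2,
            "WebServer2": ec2,
            "AppDatabase": {"Type": "AWS::RDS::DBInstance",
                            "Properties": {"Engine": "mysql",
                                           "DBInstanceClass": "db.t3.micro",
                                           "AllocatedStorage": "20",
                                           "MasterUsername": "admin",
                                           "MasterUserPassword": "change-me"}},
            # Fires when average EC2 CPU exceeds 80%; a full solution would
            # point AlarmActions at an AWS::AutoScaling::ScalingPolicy.
            "CpuHighAlarm": {"Type": "AWS::CloudWatch::Alarm",
                             "Properties": {"Namespace": "AWS/EC2",
                                            "MetricName": "CPUUtilization",
                                            "Statistic": "Average",
                                            "Period": 300,
                                            "EvaluationPeriods": 1,
                                            "Threshold": 80,
                                            "ComparisonOperator": "GreaterThanThreshold"}},
        },
    }

template_json = json.dumps(build_template(), indent=2)
```

The resulting JSON can then be deployed with `aws cloudformation deploy` or boto3's `create_stack`, which is how the demo in Task 1 would be shown.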
Role & Responsibilities: ● Work with cloud engineers and customers to solve for ...developing Big Data solutions (migration, storage, processing) ● Experience in SQL and query optimisation ● Ability to clearly communicate technical roadmap, challenges and mitigation ● Experience building and supporting large-scale systems in a production environment Technology Stack: ● Cloud Platforms – AWS ● Mandatory – Highly skilled in Python and PySpark programming, hands-on experience with AWS Redshift ● Nice to have – Experience in Big Data technologies such as Hive, Spark, Lambda, AWS Cloud with a focus on AWS Glue to create ETL pipelines ● Design dimensional models for data warehouses and data marts ● Knowledge of version control systems, build automation and CI/CD tools and fram...
Looking to have the Ometria REST API set up to connect into a Redshift database
I am looking for an experienced AWS Redshift expert to help me with data modeling. The ideal candidate should have experience with query optimization and be able to troubleshoot performance issues. This is an ongoing gig, and will be paid as we go. I need initial handholding. A person with good communication skills in English is preferred. Someone who can get onto a phone call for 15-minute sessions is preferred. Occasionally I may have some questions about AWS Redshift implementation.
Looking for a tutor who can help me with Redshift, Oracle, and NoSQL. Specifically, I need the most help with Redshift. My current level of expertise in Redshift, Oracle, and NoSQL is beginner. I require tutoring sessions three times a week or more. Ideal skills and experience for the job include: - Strong understanding of Redshift, Oracle, and NoSQL and experience working with it - In-depth knowledge of Redshift, Oracle, and NoSQL and the ability to explain concepts to a beginner - Familiarity with NoSQL databases - Excellent communication skills and the ability to explain complex concepts in a simple and understandable way - Availability to have tutoring sessions three times a week or more If you have the necessary skills and experience to help me wi...
I am looking for a freelancer with experience in Redshift SQL Query Development to assist me with a Data Analysis project. The ideal candidate should have the following skills and experience: Skills: - Proficient in Redshift SQL Query Development - Experience in Data Analysis Experience: - Previous experience in developing intermediate level queries - Ability to work with complex data sets The project involves: - Developing an intermediate level Redshift SQL query for data analysis - Working with a specific data set provided by the client - Providing insights and recommendations based on the analysis If you have the necessary skills and experience, please apply for this project.
We need an engineer to migrate Teradata to AWS Redshift. Please reach out to us if you know both Teradata and Redshift. Thanks.
I'm looking for a freelancer to help with a project that includes database migration. The database I'm working with is a Relational database (MySQL) that needs to be migrated to Redshift. Successful applicants should include evidence of relevant past work in their application. In addition to the database migration, the project will also require extensive SQL coding, use of cloud platforms such as AWS and Redshift, and experience with advanced SQL. Candidates must also have experience with Data Warehousing and an Analytics Platform – the specific platform will depend on the nature of the project. As such, I'm seeking an experienced developer, ideally with a background in database migration who can demonstrate expertise in the tools and skills required. D...
My project is to install an array of VFX software on a macOS system. This includes a number of independent license installers for Houdini, RenderMan, Redshift, and Arnold. I need assistance with the setup and installation of these indie licenses on this operating system. I already have the licenses, so I just need the right kind of help to install the software correctly. This is a small, specific project that requires knowledge of the software, the operating system, and the ability to manage software installation in order to complete it.
AWS Data Engineer with min of 6 to 9 years of experience (JD1) · Collaborate with business analysts to understand and gather requirements for existing or new ETL pip...AWS Data Engineer with min of 5 to 7 years of experience (JD 2) · Experience with AWS (Glue, Lambda, AppFlow, DynamoDB, Athena, Step Functions, S3) · Experience with relational SQL and NoSQL databases like MySQL, Postgres, MongoDB and Cassandra. · Experience with data pipeline tools like Airflow, etc. · Experience with AWS cloud services like: EC2, S3, EMR, RDS, Redshift, BigQuery · Experience with stream-processing systems like: Storm, Spark Streaming, Flink etc. · Experience with object-oriented/object funct...
Looking for Data Engineers with at least 3 years of experience in end-to-end ETL/ELT and data transformation...platforms can also apply. Only candidates who can work in a US timezone (EST/CST) should apply. AWS services such as Glue, Lambda, Athena, S3, SNS, Kinesis, Data Pipeline, PySpark, etc. Kafka/Kafka Connect, Spark, Flink or AWS Kinesis Apache NiFi Dataflow Kubernetes AWS Data Pipeline Snowflake GCP tools - GCS, GKE, BigQuery, Cloud SQL, Cloud Connector Golang Airflow TypeScript Data: DBT, Fivetran, Redshift, PostgreSQL Infra: GitHub, Bazel, Docker Azure Data Factory Azure Databricks DAX MDX Terraform Visualization tools knowledge - Data Studio, Amplitude, Tableau Languages - Scala, Python, SQL Azure Synapse Mapping Data Flow Interested candidates can WhatsApp on +1-81...
...EMR, Hadoop, and AWS services and PySpark · Proficiency with data processing: HDFS, Hive, Spark, Python. · Strong analytical skills related to working with structured, semi-structured and unstructured datasets. · Expertise in AWS cloud-native services. · Good knowledge of any RDBMS/NoSQL database with strong SQL writing skills · Experience with data warehouse tools like Redshift · Experience in deployment and migration of various workloads to Cloud services from traditional infrastructure or other Clouds. · Strong analytical and problem-solving capability · Excellent verbal and written communication skills · Ability to collaborate effectively across global teams ...
Need to understand something on AWS Redshift and hopefully create a small query/report from data we have in AWS Redshift
Hi, I have a simple 3D geometry .FBX file that I made in Blender that I would like to have a growth simulation done on in Houdini and rendered in Redshift. The geometry is very simple and nothing heavy; it's just basically a decorated torus. You don't need to render it, I can do that (unless it's best if you do it). We can talk about this and how the animation would go! The .fbx file has all the colours and materials already. 240 frames. I can share everything with you if you are interested! Thank you!
Need someone who has good experience in Spark, Redshift, S3, and AWS Glue
Data Migration from RDBMS to AWS S3 and Redshift 1. Creating a framework that converts scripting languages like PL/SQL, BTEQ etc. to Python and PySpark to use Databricks as a compute. 2. A framework that converts the existing RDBMS scripts to Python or PySpark for ready use on AWS Databricks compute. Need someone who has done this before or been part of it, and who can give some use cases on how they implemented it. The main RDBMS in use is Teradata with BTEQ scripting.
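As a toy illustration of the kind of work such a conversion framework does, the sketch below (all names hypothetical) strips BTEQ control statements and extracts the embedded SQL, which would then be rewritten for PySpark on Databricks:

```python
def bteq_to_sql(script):
    """Toy conversion step: drop BTEQ dot-commands (.LOGON, .IF, .QUIT, ...)
    and return the embedded SQL statements. A real framework would also
    translate BTEQ error handling and Teradata-specific SQL dialect."""
    statements = []
    for stmt in script.split(";"):
        # keep only non-blank lines that are not BTEQ dot-commands
        lines = [ln.strip() for ln in stmt.splitlines()
                 if ln.strip() and not ln.strip().startswith(".")]
        if lines:
            statements.append(" ".join(lines))
    return statements

script = """
.LOGON host/user,password
SELECT * FROM sales.orders;
.IF ERRORCODE <> 0 THEN .QUIT 1
.QUIT 0
"""
```

In a migration project this extraction is only the first pass; each recovered statement would then be mapped onto `spark.sql(...)` or DataFrame operations.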
Having expertise in AWS CLOUD • Designing and deploying dynamically scalable, available, fault-tolerant, and re...CloudWatch, Lambda, QuickSight, Redshift. • Experience in automating AWS resource deployment using IaC (Terraform) • Code-writing skills (Python) for serverless Lambda • Experience in deploying OpenVPN Cloud setups for security • Monitoring infrastructure health and security using SaaS applications (Prowler, CloudSploit) • Designed dashboards using QuickSight with direct query against RDS & Redshift • Selecting appropriate Cloud services to design and deploy an application based on given requirements • Implementing cost-control strategies • Understanding of application lifecycle management • Understanding in t...
...creating updates and maintaining the ETL jobs with the same technology stack. If you are interested in technological innovations and are still looking for new sources of knowledge, we would like to welcome you on board. Check below what we offer and what we expect. Our requirements: 1. At least 2 to 3 years of relevant experience as a Big Data Engineer; understanding of MongoDB (NoSQL database) and the Redshift database 2. Min 2 years of relevant hands-on application development experience in Scala with the Spark framework; experience in building modern and scalable REST-based microservices using Scala with Spark. 3. Expertise with functional programming using Scala; experience in implementing RESTful web services in Scala; experience with NoSQL/SQL databases. 4. Should have ...
We are looking for support from a Data Engineer (AWS Glue, Athena, Redshift, Python and Snowflake). We will pay 23-25k per month
...to upload some projects for learning purposes to AWS: 1 - Create CI/CD pipelines (Jenkins etc.) 2 - Add some security features 3 - How to secure servers with multiple staff logins 4 - Teach me how to create EC2 instances and other related concepts with practicals 5 - S3 buckets and their policies, CloudFormation, Elastic Beanstalk, CloudFront, Kinesis, SQS, SNS, Amazon DynamoDB, Aurora, Redshift and other database practicals, CloudWatch, CloudTrail 6 - Some microservices 7 - Docker containers 8 - VPC concepts with practicals, and a few other services, so I can build my confidence and learn faster. As I am mostly concentrating on Python I don't have much time to spend on AWS, so with someone's help I can make this process faster. Any idea how much you would charge ...
...stack that includes custom web crawlers hosted in AWS EC2 and S3, publishing applications in Snowflake and Redshift, and processing applications in AWS Redshift, AWS Glue, and Snowflake/Snowpipe. We use Sigma for data visualization because it is very easy to develop in, integrates extremely well with Snowflake, and can handle very large datasets with high performance. The application this role will build and run will need to track operations across this entire stack, including monitoring and alerting on operations parameters as well as data continuity at the field level. This position requires a combination of process management and development skills. Strong experience with both Redshift and Snowflake are required, as is experience building python applications...
Outcome expected: Build a UI that allows selecting a Redshift schema (the UI might have additional restrictions on which schemas can be selected), which will be copied to an S3 bucket in another environment. Here we have 2 Redshift databases in two different environments (for example, A1 & A2) which do not have direct access to each other. So we have to copy the schemas from the A1 Redshift to the A1 S3 bucket, from the A1 S3 bucket to the A2 S3 bucket, and then from the A2 S3 bucket into the A2 Redshift database. A click of a button should initiate the copy operation. Every operation invocation must create an audit record containing who performed the operation, when it happened, complete details of the copy source, and approval comments. Unload and copy operation progress should be vie...
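The A1 → S3 → A2 hops map directly onto Redshift's UNLOAD and COPY commands. A minimal sketch of the statement builders and the audit record, assuming a placeholder IAM role and Parquet staging:

```python
import datetime

# Placeholder role ARN; each cluster would use a role with access to its bucket
IAM_ROLE = "arn:aws:iam::123456789012:role/RedshiftCopyRole"

def unload_sql(schema, table, s3_prefix):
    # UNLOAD exports query results from the A1 cluster into its S3 bucket
    return (f"UNLOAD ('SELECT * FROM {schema}.{table}') "
            f"TO '{s3_prefix}/{schema}/{table}/' "
            f"IAM_ROLE '{IAM_ROLE}' FORMAT PARQUET;")

def copy_sql(schema, table, s3_prefix):
    # COPY loads the staged files from the A2 bucket into the A2 cluster
    return (f"COPY {schema}.{table} "
            f"FROM '{s3_prefix}/{schema}/{table}/' "
            f"IAM_ROLE '{IAM_ROLE}' FORMAT AS PARQUET;")

def audit_record(user, schema, source, target, comment):
    # Every invocation records who ran the copy, when, from where to where
    return {"user": user, "schema": schema, "source": source,
            "target": target, "approval_comment": comment,
            "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat()}
```

The cross-environment S3-to-S3 hop itself would be a bucket-to-bucket copy (e.g. boto3 `copy_object` or S3 replication), and progress could be observed via the `STL_UNLOAD_LOG` and load-related system tables.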
Data modeling for lending business. Loading data from multiple systems into AWS S3 buckets. Finally the data has to be loaded into Amazon Redshift
Hi, We are team of 13 developers, and we are expanding. We are looking for Machine Learning Engineer with 3+ years of experience. Your main role and responsibility is to build an algorithm from scratch or modify existing algorithm for our SaaS Product. This backend work is not a common backend API development. It has complex flow and process to make i...Python and common machine learning frameworks - Has a good mathematical and theoretical understanding of machine learning fundamentals - Has significant experience building and deploying machine learning applications at scale - Has a solid understanding of computer science fundamentals like algorithms You are good at: - Python - Machine Learning - Big data and ETL Pipeline (AWS Redshift) - AWS for Machine Learning ...
Can you create an Azure Data Factory pipeline that reads a parquet file from Blob Storage and writes it into Redshift, Synapse, or Snowflake? Use Azure Databricks for basic transformation. Blob Storage --> Azure Databricks --> Redshift
Create a DaaS using structured data residing on Redshift. The DaaS is a collection of template-based reports with filters, offered in different combinations to several subscription levels.
Need a technical author who has experience in writing on topics like AWS Azure GCP DigitalOcean Heroku Alibaba Linux Unix Windows Server (Active Directory) MySQL PostgreSQL SQL Server Oracle MongoDB Apache Cassandra Couchbase Neo4J DynamoDB Amazon Redshift Azure Synapse Google BigQuery Snowflake SQL Data Modelling ETL tools (Informatica, SSIS, Talend, Azure Data Factory, etc.) Data Pipelines Hadoop framework services (e.g. HDFS, Sqoop, Pig, Hive, Impala, Hbase, Flume, Zookeeper, etc.) Spark (EMR, Databricks etc.) Tableau PowerBI Artificial Intelligence Machine Learning Natural Language Processing Python C++ C# Java Ruby Golang Node.js JavaScript .NET Swift Android Shell scripting Powershell HTML5 AngularJS ReactJS VueJS Django Flask Git CI/CD (Jenkins, Bamboo, TeamCity, Octopus Depl...
--ROLE-- The AWS DevOps Engineer will be working closely with the founders of a startup to design and create an AWS cloud infrastructur...can come into our London office early on in the project to meet the team, that would be a bonus. However, we are also open to fully remote working for the right candidate. --RESPONSIBILITIES-- • Designing and implementing cloud infrastructure • Implementing the CI/CD pipeline preferably with GitHub • Security and performance • Networking --EXPERIENCE REQUIRED-- • AWS resources (RDS, DynamoDB, Redshift, Lambda, API Gateway, Event Bridge, EC2) • Big data infrastructure • Infrastructure as code with Terraform --DESIRABLE EXPERIENCE-- • Data lake and data warehouse --THE COMPANY-- Early stage startup driving...
I created this project (it builds an **ELT pipeline** that extracts data from **S3**, stages it in **Redshift**, and transforms it into a set of **dimensional tables** for the Sparkify analytics team to continue finding insights into what songs their users are listening to). It is very simple and all ready and done, but I have one issue: it is unable to run. Please address the issue noted below. The script results in the error shown in the attached screenshot. The project is attached.
Create a data pipeline using Airflow (S3 --> Databricks --> Redshift). Composer.
Can you create a sample data pipeline using Apache Airflow? Source: S3. Target: Redshift.
Help needed with updating an AWS CloudFormation template (YAML) related to Redshift, secrets, Glue, Lambda, etc.
Hi, I am a data scientist working in a travel company. I need help with my day-to-day tasks, so this won't be a one-time project but daily support for my job. The tools I mainly work with: 1. Amazon SageMaker 2. Amazon S3 and Redshift 3. AWS Lambda 4. Google Colab notebooks. And a few others which we can discuss later.
Hi, I need a Python script that can pick up data from a SharePoint list and push it to a Redshift table.
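A minimal sketch of the Redshift-facing half of such a script, assuming the SharePoint items have already been fetched (e.g. via the SharePoint REST API with `requests`) and using placeholder bucket/table/role names; in practice the CSV would be uploaded to S3 with boto3 before issuing the COPY:

```python
import csv
import io

def items_to_csv(items, columns):
    """Serialize SharePoint list items (plain dicts) into CSV text suitable
    for staging in S3 ahead of a Redshift COPY. Extra fields are ignored."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=columns, extrasaction="ignore")
    writer.writeheader()
    writer.writerows(items)
    return buf.getvalue()

# Hypothetical items, as they might come back from the SharePoint REST API
items = [{"Id": 1, "Title": "First"}, {"Id": 2, "Title": "Second"}]
csv_text = items_to_csv(items, ["Id", "Title"])

# Placeholder table, bucket, and role; loads the staged CSV into Redshift
copy_stmt = ("COPY sharepoint_items FROM 's3://my-bucket/sharepoint/export.csv' "
             "IAM_ROLE 'arn:aws:iam::123456789012:role/MyRole' "
             "CSV IGNOREHEADER 1;")
```

Staging through S3 plus COPY is generally preferred over row-by-row INSERTs, which Redshift handles poorly at any volume.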
I need to move the sample AdventureWorks database from my SQL Server to AWS Redshift or RDS using Airflow or Kafka
Amazon Seller API integration with third-party tools. Key Skills: SP-API, AWS - EC2, S3, IAM, Amazon Redshift, AWS Lambda. We would discuss project details with a suitable candidate
Skill sets: Python, Spark, Kafka/Kinesis, EMR/Glue, S3, Redshift, Airflow, Jenkins. Activities developers need to perform: load data into S3; perform ETL functions in Glue; data tiering on S3; filter, join, and aggregation; move data to Redshift; EMR; Apache Airflow as the overall orchestrator; pipelines will run on Glue; Python skills; move code from Dev to Production.
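In Glue the filter/join/aggregation step would be written in PySpark against DynamicFrames or DataFrames; plain Python stands in here just to show the shape of the transform (all data hypothetical):

```python
from collections import defaultdict

# Toy input, standing in for data already loaded into S3
orders = [{"order_id": 1, "cust_id": 10, "amount": 50.0},
          {"order_id": 2, "cust_id": 11, "amount": 75.0},
          {"order_id": 3, "cust_id": 10, "amount": 0.0}]
customers = {10: "alice", 11: "bob"}

# Filter: drop zero-amount orders
valid = [o for o in orders if o["amount"] > 0]

# Join: attach the customer name to each order
joined = [{**o, "name": customers[o["cust_id"]]} for o in valid]

# Aggregation: total amount per customer, ready to be written to Redshift
totals = defaultdict(float)
for row in joined:
    totals[row["name"]] += row["amount"]
```

In the actual pipeline each step would be a Spark operation (`filter`, `join`, `groupBy().agg()`), with Airflow triggering the Glue job and the final Redshift load.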
Looking for Data Engineer Full time Experience- 5-8 Years Primary Skills- S3, AWS Redshift, Pyspark, AWS Glue, Python, SQL Working Days - Mon to Fri Shift- Indian Shift
AMI or AEIMS members ONLY Hi, A small team of 3D animators is seeking an additional C4D animator, preferably with XP and Redshift knowledge. Seeking a medical illustrator. Please only apply if you are a professional medical illustrator AMI or AEIMS and/or have a specific degree in medical illustration. Work is primarily dental and orthodontic. Established workflow and library. Casual and friendly work from home environment. Detailed 3D storyboard and Skype based assistance provided. Gig will likely last one to two years, could be longer. Prefer someone with a PC workstation with a 3000 series RTX GPU or several 2000 series GPUs. High speed internet recommended.
...promote them from Dev to QA to Prod Build and host API Services Build tooling to allow users to deploy Micro-services to production with a simple set of commands (Terraform) Profile: Bachelor's degree in Computer Science or related technical field. Software Engineer who is an expert in Cloud Infrastructure and Architecture (AWS) Experience with Amazon Web Services (AWS): Lambda, Amazon Redshift, Glue, EKS, Athena, API Gateway Programming experience with Unix/Shell scripting Hands-on experience with SQL and Python Hands-on experience with orchestration tools such as Airflow and DBT Strong problem-solving skills, research, and analytical thinking Possesses solid troubleshooting skills Ability to work in a fast-paced and agile development environment Preferable exper...
You will find instructions in the attached PDF. For this data task, you are expected to write a piece of SQL code. If possible, stick to Redshift SQL. Feel free to add comments to your code to explain your process. The solving deadline is November 5th, 10:00 AM (GMT+1).
This opening is for a stealth startup and is unpaid. It's only for experience and to help with developing a trillion-dollar software idea! Currently seeking a part-time Full Stack Developer for this project. Contact me only if you know the following: we need to develop with a fast JavaScript framework (Bun), AWS S3, EC2, Redshift, Aurora, DocumentDB, DynamoDB and the like. Please message me for project details!
Skillset needed: defect resolution and production support of Big Data ETL development using AWS native services; create data pipeline architecture by designing and implementing data ingestion solutions; integrate data sets using AWS services such as Glue and Lambda functions; design and optimize data models on AWS Cloud using AWS data stores such as Redshift, RDS, S3, Athena; author ETL processes using Python and PySpark; ETL process monitoring using CloudWatch events. You will be working in collaboration with other teams. We are looking for an engineer to resolve the issues described below in our AWS environment. Enable paging through data returned from each API using the offset field. Delta Load enablement for Dimension tables (16), Fact tables (6), and Derived tables (4). Go back in time a...
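The offset-paging requirement can be sketched with a generic loop; `fetch_page` stands in for the real API call, and the stop condition (a short page) is an assumption about how the API signals the end of the data:

```python
def fetch_all(fetch_page, limit=100):
    """Page through an API using an offset field until a short page
    is returned. fetch_page(offset, limit) is the real API call."""
    offset, results = 0, []
    while True:
        page = fetch_page(offset, limit)
        results.extend(page)
        if len(page) < limit:      # last page reached
            return results
        offset += limit

# Toy data source simulating a 250-row API response set
DATA = list(range(250))

def fake_fetch(offset, limit):
    return DATA[offset:offset + limit]

rows = fetch_all(fake_fetch, limit=100)
```

If the API instead reports a total count or a next-page token, the loop condition would change accordingly, but the offset-advancing structure stays the same.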