1 Read data from different sources such as Oracle, IBM DB2, and XML.
2 Write data to different target systems such as S3, Hive, IBM DB2, and Oracle, in different formats (JSON, Parquet).
3 Apply various transformation logic/business rules to the data before pushing it into the target.
This is a long-term job, expected to run for at least several months.
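The requirements above map to a standard Spark ETL pipeline. The following is a minimal Scala sketch of that flow under stated assumptions: Spark 3.x with Hive support, the third-party spark-xml package for XML input, and placeholder connection URLs, table names, paths, and column names (none of these identifiers come from the posting).

```scala
// Minimal sketch of the read -> transform -> write flow described above.
// All URLs, schemas, table names, and paths are illustrative placeholders.
import org.apache.spark.sql.{DataFrame, SparkSession}
import org.apache.spark.sql.functions._

object EtlJobSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("multi-source-etl")
      .enableHiveSupport() // needed for saveAsTable into Hive
      .getOrCreate()

    // 1. Read from a JDBC source (Oracle shown; IBM DB2 works the same
    //    way with its own driver and URL).
    val oracleDf: DataFrame = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//db-host:1521/ORCL") // placeholder
      .option("dbtable", "SRC_SCHEMA.ORDERS")                 // placeholder
      .option("user", sys.env("DB_USER"))
      .option("password", sys.env("DB_PASSWORD"))
      .load()

    // XML input assumes the spark-xml package is on the classpath.
    val xmlDf: DataFrame = spark.read
      .format("xml")
      .option("rowTag", "record")                             // placeholder
      .load("s3a://landing-bucket/incoming/*.xml")

    // 2. Apply transformation logic / business rules (illustrative rule).
    val transformed = oracleDf
      .filter(col("status") === "ACTIVE")
      .withColumn("load_ts", current_timestamp())

    // 3. Write to targets in different formats.
    transformed.write.mode("overwrite")
      .parquet("s3a://target-bucket/orders-parquet/") // Parquet on S3
    transformed.write.mode("overwrite")
      .json("s3a://target-bucket/orders-json/")       // JSON on S3
    transformed.write.mode("append")
      .saveAsTable("analytics.orders")                // Hive table

    spark.stop()
  }
}
```

Writes back to Oracle or DB2 would reuse the JDBC `format("jdbc")` path on the writer side with the appropriate driver.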
26 freelancers are bidding on average $1117 for this job
Hi there, I have checked the details. I have great experience with Hadoop, Java, MapReduce, Scala, and Spark. Please start a chat so we can discuss this job in more detail. Thanks.
I hope to see you in chat. I am an experienced Spark/Scala developer with full-stack knowledge and experience. I'm sure I can do this well. Thanks for your kind attention.
Hello, I have been working with Hadoop and big data technologies for more than 7 years. I have worked with Spark, Kafka streaming, the ELK stack, Cassandra, Hive, and HBase. Can we talk further about this? Thanks.
Hello, how are you? I read your job description carefully. I'm sure I can deliver high quality and work with you long term. Let's discuss more details via chat. Kind regards.