These days, 2D just isn’t enough. You don’t have to look any further than your local cinema.
I am looking for a male candidate who is an expert in ETL, Informatica, SQL and PL/SQL, with good knowledge of Big Data, Hadoop, Sqoop, Hive, HBase and Pig, and solid experience in design patterns, ETL and frameworks. Good communication skills are required, along with availability to take phone interviews during US working hours for a contracting position in a US time zone. I will send the job description well in advance. There may be multiple phone interview rounds, and the funds will only be transferred if the interview is successful and the candidate is selected.
I need someone to develop a full JSON parser for Hive, to be incorporated into the big data platform Red Sqirl. The jar should be able to take parameters, but ultimately it should be able to parse any JSON document into its constituent tables/rows/arrays on Hive. The parser then needs to be configured as a node on Red Sqirl. The bulk of the work is the development of the parser; configuring it as a node on Red Sqirl is reasonably straightforward.
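The deliverable jar would presumably be Java, since Red Sqirl is a Java platform; purely as a language-neutral sketch of the core flattening logic, and under the assumption that nested objects map to dotted column names while arrays explode into one row per element (mirroring a Hive LATERAL VIEW), the parsing step might look like:

```python
import json

def flatten(obj, prefix=""):
    """Recursively flatten a JSON value into a list of flat row dicts.

    Nested objects produce dotted column names; arrays produce one
    row per element, cross-joined with the rows from sibling keys.
    """
    rows = [{}]
    if isinstance(obj, dict):
        for key, value in obj.items():
            col = f"{prefix}.{key}" if prefix else key
            new_rows = []
            for sub in flatten(value, col):
                for row in rows:
                    merged = dict(row)
                    merged.update(sub)
                    new_rows.append(merged)
            rows = new_rows
    elif isinstance(obj, list):
        # Explode the array: each element yields its own row(s).
        rows = [row for item in obj for row in flatten(item, prefix)]
    else:
        rows = [{prefix: obj}]
    return rows

doc = json.loads('{"user": {"name": "ann"}, "tags": ["a", "b"]}')
print(flatten(doc))
# [{'user.name': 'ann', 'tags': 'a'}, {'user.name': 'ann', 'tags': 'b'}]
```

The resulting flat rows could then be written out as Hive tables; the actual Red Sqirl node wiring is a separate, smaller task, as the posting notes.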
DOMAIN: Big Data and Hadoop. TITLE: Real-Time Project - Insurance. LANGUAGE: Java. VM: Cloudera QuickStart VM 5.5. IDE: Eclipse IDE. ABSTRACT: Analyze health reports across years for the US market and find the average number of privately and publicly insured people for the years 2001-2011. The project is processed with the MapReduce method and the output produced.
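The project itself is a Java MapReduce job on Cloudera; as a minimal sketch of the same map/reduce logic only, using made-up sample figures (not real health-report data) where each record holds a year and counts of privately and publicly insured people:

```python
from collections import defaultdict

# Hypothetical sample records: (year, privately_insured, publicly_insured).
records = [
    (2001, 200.0, 100.0),
    (2002, 210.0, 110.0),
    (2003, 220.0, 120.0),
]

# Map phase: emit one (insurance_type, count) pair per record and type.
mapped = []
for year, private_n, public_n in records:
    mapped.append(("private", private_n))
    mapped.append(("public", public_n))

# Reduce phase: per key, accumulate sum and count, then average.
totals = defaultdict(lambda: [0.0, 0])
for key, value in mapped:
    totals[key][0] += value
    totals[key][1] += 1

averages = {key: s / n for key, (s, n) in totals.items()}
print(averages)  # {'private': 210.0, 'public': 110.0}
```

In the real job, the map phase would run in a Mapper over HDFS input splits and the averaging in a Reducer keyed by insurance type.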
Hi, I'm Emanuele Ferrari from Italy. I want to buy hash power from users connected to the website, to create a remote Monero mining website (for example linked to Coin-Hive, or a better alternative) with a referral program, statistics, and automatic payments for users (so I need to measure each user's hashing power). I need user registration, an admin page and a user page. I want a simple design and simple description, but it must look like a professional website. More info in PVT.
I've done my engineering with a specialization in Electronics and Communication. As I am very interested in the Big Data domain, I earned a certification in it. The project is all about data analysis; the tools required are Hive, Pig and Sqoop, with HDFS used for data storage and the MapReduce framework used for processing.
Looking for a trainer to teach Hadoop and big data concepts at our institute in Hyderabad.
I would like to build an HQL script which gets data from two tables, does some intermediate-to-complex transformations by pulling the data from production, then saves/updates the data back to production. This HQL will involve 3-4 staging/temp tables.
This is regarding a big data Hadoop + Spark project. Right now we are loading data from Hive tables into Azure SQL tables with an SSIS package, and for 2 million records it is taking 15 minutes! I want to try Spark instead; is Spark the best fit for this or not?
Talented IT professional with twelve years of experience in design and development on Big Data (Hadoop, Spark, Kafka, NoSQL), data engineering platforms and Microsoft platforms, using object-oriented and scripting languages. As a big data technical leader, has knowledge of multiple scripting and programming languages. Possesses strong abilities in big data environments and is extremely analytical, with excellent problem-solving skills. Proven ability to troubleshoot and solve complex problems and to lead small project groups. Demonstrated initiative to learn and explore new technologies. Strong sense of "do it right the first time", meticulous attention to detail, and ability to communicate technical knowledge to peers and support personnel. Experienced in the complete SDLC, including build and release management activities. Particular key strengths include:
- Excellent interpersonal and communication skills for working with a team.
- Leader and team player with proven conceptual, analytical and problem-solving capabilities.
- Strong ability to multitask while managing time and commitments effectively.
- Provided technical training and conducted workshops for IT professionals through a mentorship program, helping them succeed in their projects.
Looking forward to serving your needs in Big Data technologies. Highly motivated and ready to explore new technologies to provide quicker solutions. Very much interested in taking up assignments at startup organizations in fields like IoT and Big Data.
Hi, I want to write a Scala program which does the following. 1. Reads from a Kafka source (using Kafka DStreams). 2. Applies some functions, such as filter, to the incoming data. 3. Writes back to another Kafka topic. Further details will be provided later. The gist of the program is [url removed, login to view]. It needs to employ the best Scala optimizations and should work efficiently.
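The requested implementation is Scala with Spark's Kafka DStream API; as a sketch of just the transformation in step 2, with Kafka and Spark stripped away and a hypothetical predicate `keep_message` standing in for whatever filter the final spec requires:

```python
# Hypothetical predicate: keep only messages tagged as valid.
def keep_message(msg: str) -> bool:
    return msg.startswith("valid:")

def transform(batch):
    """Filter step applied to one micro-batch of records.

    In the Scala job this same predicate would be passed to
    stream.filter(...) on the Kafka DStream; here it is applied
    to a plain list so the logic can be tested in isolation.
    """
    return [m for m in batch if keep_message(m)]

batch = ["valid:order-1", "bad:order-2", "valid:order-3"]
print(transform(batch))  # ['valid:order-1', 'valid:order-3']
```

Keeping the predicate a pure, standalone function like this makes the filter logic unit-testable independently of the Kafka plumbing, whatever the final details turn out to be.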