    66 jobs found, pricing in USD
    Spark MLlib 6 days left

    The project is about writing Spark code for logistic regression and Naive Bayes using MLlib on a dataset. Please contact me. (A sketch of this kind of MLlib code follows below.)

    $200 (Avg Bid)
    1 bid
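
As a rough illustration of what this posting asks for, here is a minimal Scala sketch using the MLlib (spark.ml) API. The file name, label column, and feature columns are placeholders, since the actual dataset is not described.

```scala
import org.apache.spark.ml.classification.{LogisticRegression, NaiveBayes}
import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.sql.SparkSession

object MLlibClassifiers {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("mllib-classifiers").getOrCreate()

    // Hypothetical CSV with a numeric "label" column and feature columns f1..f3.
    val df = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("data.csv")

    // MLlib estimators expect all features packed into a single vector column.
    val assembled = new VectorAssembler()
      .setInputCols(Array("f1", "f2", "f3"))
      .setOutputCol("features")
      .transform(df)

    val Array(train, test) = assembled.randomSplit(Array(0.8, 0.2), seed = 42)

    // Logistic regression and Naive Bayes share the same fit/transform pattern.
    // Note: the default multinomial Naive Bayes requires non-negative features.
    val lrModel = new LogisticRegression().setLabelCol("label").fit(train)
    val nbModel = new NaiveBayes().setLabelCol("label").fit(train)

    lrModel.transform(test).select("label", "prediction").show(5)
    nbModel.transform(test).select("label", "prediction").show(5)

    spark.stop()
  }
}
```

Swapping in a different classifier only changes the estimator line, which is the main convenience of MLlib for this kind of task.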

    Video course on Big Data Hadoop. It would be a screen recording with voice-over and should cover the course material using practical examples. The recording will be approximately 10 hours.

    $191 (Avg Bid)
    6 bids

    I need you to write some articles on the topics below, in Microsoft Word format only. Apache Spark components: 1. Spark Core 2. Spark SQL. Just these two, no others. The articles should be written in Microsoft Word and should cover each topic from the basics through to complete detail. We are an e-learning company and the documents will be used for tutorials, so the detail should run from basic to complete. Important points: 1. Every article must contain images along with the explanation. 2. No copy-pasting from any website; write it in your own words, as if you are teaching someone. 3. If copied content is found, payment will not be made. (A small example contrasting the two components follows below.)

    $22 (Avg Bid)
    7 bids
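
For orientation, the two components the articles are supposed to cover can be contrasted in a few lines: Spark Core is the low-level RDD API, while Spark SQL expresses the same computation declaratively over DataFrames. A small Scala sketch (my own illustration, not part of the posting):

```scala
import org.apache.spark.sql.SparkSession

object CoreVsSql {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("core-vs-sql").getOrCreate()
    import spark.implicits._

    val lines = Seq("spark core", "spark sql", "spark core")

    // Spark Core: word count with the low-level RDD API (map/reduce style).
    spark.sparkContext.parallelize(lines)
      .flatMap(_.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .collect()
      .foreach(println)

    // Spark SQL: the same aggregation expressed declaratively over a DataFrame.
    lines.toDF("line")
      .selectExpr("explode(split(line, ' ')) AS word")
      .groupBy("word")
      .count()
      .show()

    spark.stop()
  }
}
```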
    Hadoop Spark/Scala Project 4 days left
    VERIFIED

    Homework A) Using Hadoop HDFS & Spark/Scala programming. Source dataset: [url removed, login to view]. Download data for 1999, 2000, 2001, 2002, 2003, 2004, 2005, 2006, 2007, 2008.
    1) Download and combine all data for the years specified above.
    2) Data cleanup: find and remove/filter out outliers & bad data.
    3) Perform statistical analysis on the data: counts / averages / sums / min / max.
    4) Using Spark/Scala programming on the entire dataset, determine: a) what percent (%) of flights are on-time, b) cancelled, c) delayed; d) top 5 causes of delays; e) most common causes of flight delays; f) airlines with the most delays to a destination; g) airline with the most cancellations; h) airline with the most on-time flights; i) national averages for on-time, delayed, and cancelled flights. (A sketch of the core percentage computation follows below.)
    J) Perform some visualization in Tableau (send me the output data file; I will do the visualization myself).
    K) All of the above code in a separate PDF file.
    B) Create 10-15 pages (in Word) covering the following topics: 1) data source; 2) description of the data and its schema; 3) data pre-processing required (parsing, filtering, etc.); 4) any bad-data issues encountered; 5) describe your Spark algorithm; 6) describe any other ecosystem or additional tools used; 7) describe the output; 8) how did you verify that your output is correct? 9) discuss the performance/scale characteristics; 10) what would you have done differently if you did this again? 11) draw conclusions from this exercise.
    Please NOTE: This must be your original work. Someone else's code cannot be copied from online and used in this project; doing so will earn you an F grade in this course.
    Deliverable timeline: 1) Code in a separate document -- deliver by NOV 25. 2) Documentation (10-15 pages in Word) -- deliver by NOV 27. 3) Output dataset file -- deliver by NOV 30. Deadline: NOV 30 for all of the above.
    NB: Use your personal Hadoop cluster, or I can provide access to a cloud-based Hadoop cluster with the data files already downloaded onto an HDFS folder.

    $167 (Avg Bid)
    6 bids
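
A rough Scala sketch of the central percentage computation in part A), assuming the usual airline on-time performance schema with ArrDelay and Cancelled columns and treating arrivals more than 15 minutes late as delayed; the HDFS path, column names, and threshold are assumptions, not taken from the posting.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object FlightStats {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder.appName("flight-stats").getOrCreate()

    // Yearly CSV files combined by a glob on HDFS; adjust names to the real dataset.
    val flights = spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("hdfs:///data/flights/*.csv")
      .filter(col("Cancelled").isNotNull) // basic cleanup: drop rows with no status

    val total = flights.count().toDouble

    val cancelled = flights.filter(col("Cancelled") === 1).count()
    val delayed   = flights.filter(col("Cancelled") === 0 && col("ArrDelay") > 15).count()
    val onTime    = flights.filter(col("Cancelled") === 0 && col("ArrDelay") <= 15).count()

    println(f"on-time:   ${onTime / total * 100}%.2f%%")
    println(f"delayed:   ${delayed / total * 100}%.2f%%")
    println(f"cancelled: ${cancelled / total * 100}%.2f%%")

    spark.stop()
  }
}
```

The per-airline and per-cause questions follow the same pattern, with a groupBy plus an aggregate and an orderBy.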

    I have 24 months of sales data for multiple items at multiple stores. I want to predict future sales for the next 12 months, in the same structure as the input dataset. At the end of the project I need

    $426 (Avg Bid)
    34 bids

    I need a simple project built in Java, using the Apache SPARK SQP framework.

    $23 (Avg Bid)
    2 bids

    • Expert coding skills in Scala, Python, and scripting languages
    • Experience developing scalable web services and distributed real-time message processing systems (Kafka, Spark, etc.)
    • Distributed computing frameworks (Hadoop or Spark)

    $520 (Avg Bid)
    27 bids

    Project cost: 100-150 dollars, with a 50-dollar bonus if delivered ahead of schedule. The project description is attached. Technologies: AWS, cloud computing, Hadoop, Spark. Think before taking on the assignment.

    $231 (Avg Bid)
    8 bids

    I need some help redesigning my spark-submit code for a PySpark-based code base. It's a short task, but there is more work after this.

    $34 (Avg Bid)
    5 bids

    We are a big data startup looking for Hadoop experts / HDP platform consultants who can work on an hourly, as-needed basis: 1. Hawq 2. HDFS Encryption [url removed, login to view] 4. Ambari 5. Atlas 6. Cloudbreak 7. Flume 8. Hive [url removed, login to view] 10. Kafka 11. Knox 12. Mahout 13. Oozie 14. Phoenix 15. Pig 16. Ranger 17. Slider 18. Solr 19. Spark 20. Sqoop 21. Storm 22. Tez 23. Hue 24. ZooKeeper 25. NiFi 26. WebHDFS

    $407 (Avg Bid)
    14 bids
    Spark Developer Ended
    VERIFIED

    Should be able to architect, design, and code with terabytes of data.

    $14 / hr (Avg Bid)
    15 bids

    It is a project to extract information using MapReduce. I will give the details later.

    $33 (Avg Bid)
    11 bids

    I have a data lake application installed on CentOS servers. The application stack consists of multiple services in a master-slave architecture. The goal is to dockerize the whole application. Due to the architecture of the application, we would need to install multiple services on the same Docker image. The projected setup involves building two Docker images.
    Master image: main app webpage, main app server, Postgres for the main app, Accumulo, Hadoop/Spark, ZooKeeper.
    Slave image: Accumulo, Hadoop/Spark, ZooKeeper.
    Note: I already tried to build the whole setup in Docker Swarm as multiple images, but there were issues across the HDFS/Spark cluster. The main challenge right now is building a single image with multiple services and making the whole master-slave setup work.

    $198 (Avg Bid)
    9 bids

    Hey, thanks in advance. I am looking for an expert in network anomaly detection; cloud computing, big data analysis, and simulation tools too, if you can. Work will start immediately if I find a good fit. Thanks.

    $54 / hr (Avg Bid)
    22 bids

    Do not apply if you a) do not have the relevant work permits for Germany or b) are unable to work on site for us. Please do not waste our time if you cannot meet these two requirements. We are looking for a Scala developer with the following: 1) knowledge of Hadoop systems, 2) knowledge of Akka and Play, 3) knowledge of data modelling, 4) knowledge of quants.

    $226 / hr (Avg Bid)
    4 bids

    We are a big data startup looking for Hadoop experts / HDP platform consultants who can work on an hourly, as-needed basis: 1. Hawq 2. HDFS Encryption [url removed, login to view] 4. Ambari 5. Atlas 6. Cloudbreak 7. Flume 8. Hive [url removed, login to view] 10. Kafka 11. Knox 12. Mahout 13. Oozie 14. Phoenix 15. Pig 16. Ranger 17. Slider 18. Solr 19. Spark 20. Sqoop 21. Storm 22. Tez 23. Hue 24. ZooKeeper 25. NiFi 26. WebHDFS

    $413 (Avg Bid)
    19 bids

    Need help with some Spark- and Cassandra-related questions. Simple setup and demo. (See the connector sketch below.)

    $25 (Avg Bid)
    1 bid
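
For reference, a "simple setup and demo" with Spark and Cassandra usually comes down to the DataFrame read/write API of the spark-cassandra-connector. A minimal Scala sketch, assuming the connector package is on the classpath, a Cassandra node reachable at the configured host, and placeholder keyspace/table names:

```scala
import org.apache.spark.sql.SparkSession

object CassandraDemo {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("cassandra-demo")
      .config("spark.cassandra.connection.host", "127.0.0.1") // placeholder host
      .getOrCreate()

    // Read an existing Cassandra table into a DataFrame.
    val users = spark.read
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "demo", "table" -> "users"))
      .load()

    users.show(10)

    // Write the (possibly transformed) DataFrame to another table.
    // Note: the target table must already exist in Cassandra.
    users.write
      .format("org.apache.spark.sql.cassandra")
      .options(Map("keyspace" -> "demo", "table" -> "users_copy"))
      .mode("append")
      .save()

    spark.stop()
  }
}
```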
    scala program Ended
    VERIFIED

    Need 2 Scala programs to be completed in 1 day.

    $23 (Avg Bid)
    1 bid

    I have set up an AWS cluster with Hadoop, Spark, and MongoDB nodes. I just need help with ingesting raw data into Hadoop; Spark should read and process this data and then store it in MongoDB. Set up the ETL pipeline as follows (a sketch of the Spark side appears below):
    1) Download and unzip the pageview dataset from a website.
    2) Ingest the raw data into Hadoop.
    3) Use Spark to read data from the Hadoop cluster and compute the per-language statistics.
    4) Store the per-language statistics into MongoDB directly from Spark.
    5) Write scripts to automate this once every month.

    $264 (Avg Bid)
    16 bids
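
A Scala sketch of steps 3 and 4 of that pipeline, assuming Wikipedia-style whitespace-separated pageview dumps already sitting on HDFS and the MongoDB Spark connector 10.x on the classpath; the HDFS path, connection URI, and field layout are placeholders.

```scala
import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object PageviewEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .appName("pageview-etl")
      // Connector 10.x style; placeholder URI, database, and collection.
      .config("spark.mongodb.write.connection.uri", "mongodb://127.0.0.1/stats.per_language")
      .getOrCreate()

    // Pageview dumps are typically lines of: <project/language> <page_title> <views> <bytes>
    val raw = spark.read.text("hdfs:///data/pageviews/")

    val parsed = raw
      .withColumn("fields", split(col("value"), " "))
      .select(
        col("fields").getItem(0).as("language"),
        col("fields").getItem(2).cast("long").as("views"))
      .filter(col("views").isNotNull)

    // Per-language statistics to be stored in MongoDB.
    val perLanguage = parsed
      .groupBy("language")
      .agg(sum("views").as("total_views"), count(lit(1)).as("pages"))

    perLanguage.write
      .format("mongodb") // use "mongo" with the older 3.x connector
      .mode("overwrite")
      .save()

    spark.stop()
  }
}
```

The monthly automation in step 5 would just wrap a spark-submit of this job in whatever scheduler is available (cron, Oozie, etc.).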

    I am looking for a male expert in SQL, PL/SQL, Hadoop, Sqoop, Hive, HBase, Pig, and Kafka, with a data warehousing background; good experience in database design, ETL, and frameworks; and good working knowledge of MapReduce, HDFS, and Spark. Good communication skills are needed, along with availability for phone interviews for a contracting position during US working hours. I will send the job description well in advance. There might be 2 phone interview rounds, and the funds will only be transferred if the interview is successful and the candidate is selected.

    $558 (Avg Bid)
    18 bids
