Computer engineer specializing in Big Data.
Throughout my professional career I have developed expertise in implementing data storage, transformation, and visualization solutions, with a particular focus on building real-time data flows and service APIs that support business needs.
In more detail, my professional experience includes:
Design of architectures based on the Cloudera/Hortonworks distributions and Azure Cloud.
Distributed data storage and processing: HDFS, Hive, Impala and Kudu.
NoSQL databases: MongoDB, Couchbase, Cassandra.
Data ingestion in Big Data environments: Apache NiFi, Flume.
In-memory data processing for batch and streaming workloads: Apache Spark.
User tools: Hue, Oozie and Sqoop.
Indexing and information search: ELK, Solr.
Machine learning project methodology for model development.
Programming languages: Java, Scala, Python, R.
Knowledge of data processing libraries: NumPy, Pandas, scikit-learn.
Knowledge of neural network tools: TensorFlow and Keras.
Knowledge of Anaconda and Jupyter Notebook.
Knowledge of Azure and Databricks.
I consider myself a proactive person who seeks out solutions in the world of data, continually strengthening my analytical skills in order to add value to projects in the field.