I have hands-on experience with Hadoop and Spark.
Hadoop :- Extracted data from an RDBMS into the data lake (Hive) using Sqoop, processed it in the Hive layer, and exported the transformed data back to the RDBMS.
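For illustration, the Sqoop import/export steps above can be sketched as command builders; the JDBC URL, table, and database names here are placeholders, not the actual project values:

```python
def sqoop_import_cmd(jdbc_url, table, hive_table, mappers=4):
    """Build a 'sqoop import' command that lands an RDBMS table in Hive."""
    return [
        "sqoop", "import",
        "--connect", jdbc_url,        # e.g. jdbc:mysql://host/db (placeholder)
        "--table", table,
        "--hive-import",              # write straight into the Hive warehouse
        "--hive-table", hive_table,
        "-m", str(mappers),           # number of parallel map tasks
    ]

def sqoop_export_cmd(jdbc_url, table, export_dir):
    """Build a 'sqoop export' command that pushes processed Hive data back."""
    return [
        "sqoop", "export",
        "--connect", jdbc_url,
        "--table", table,
        "--export-dir", export_dir,   # HDFS directory of the processed data
    ]

if __name__ == "__main__":
    print(" ".join(sqoop_import_cmd("jdbc:mysql://db-host/sales",
                                    "orders", "datalake.orders")))
```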
Spark :- Ingested data from the upstream source system using Confluent Kafka (on GCP), created a dynamic schema from the CSV files, and performed upserts into the Hive layer.
Knowledge of Azure, StreamSets, Hortonworks, Oozie, Elasticsearch, and other big data technologies.