How to Work with Apache Spark on your On-Premise Cluster of Computers?
Many real-world datasets cannot be analysed on a laptop or a single desktop computer, so it is important to be able to run Spark on your own locally available cluster of computers.
Some of the main ways of doing this include:
- Automating private bare-metal NUC cluster setup with Cobbler
- Installing Spark-Hadoop-Yarn-Hive-Zeppelin without Root Access
- Some Setups for your On-Premise Cluster of Computers
- Setting up networking
- Setting up an Ubuntu Server Master
- BASH your own Spark-Yarn-HDFS Cluster
- Under study/work: Kubernetes for container orchestration to create on-premise or public-cloud clusters
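Once a Spark-YARN-HDFS cluster along these lines is up, a quick way to check that the pieces talk to each other is to submit Spark's bundled SparkPi example to YARN. The sketch below is illustrative and assumes `SPARK_HOME` is set and that `HADOOP_CONF_DIR` points at your cluster's Hadoop configuration directory (the `/etc/hadoop/conf` path is a hypothetical placeholder; adjust it to your installation):

```shell
#!/usr/bin/env bash
# Minimal smoke test for a Spark-on-YARN cluster (a sketch, not a definitive setup).

# Hypothetical paths -- substitute the ones from your own installation.
export HADOOP_CONF_DIR=/etc/hadoop/conf

# Submit the SparkPi example that ships with every Spark distribution.
# --master yarn          : let YARN schedule the job across the cluster
# --deploy-mode cluster  : run the driver on a cluster node, not this machine
"$SPARK_HOME"/bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --class org.apache.spark.examples.SparkPi \
  "$SPARK_HOME"/examples/jars/spark-examples_*.jar 100

# If this succeeds, an approximation of Pi appears in the driver's YARN logs:
#   yarn logs -applicationId <application_...>
```

Running the same command with `--deploy-mode client` instead prints the result directly to your terminal, which is often more convenient while debugging a fresh cluster.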