Getting Started with Flink
Install Apache Flink on your machine and get started today.

Wipe the slate clean and learn Flink from scratch
Introduction to Apache Flink
Flink Environment Setup: Ubuntu
Flink Environment Setup: Windows
Flink Environment Setup: Multinode Cluster
Why Apache Flink: Its Features
A Comprehensive Guide: Flink History, Architecture, and Features
Flink Ecosystem Components
Best Apache Flink Books

Level up to more exciting and challenging chapters
Flink Shell Commands
Flink Streaming Windows
Setting up a Flink Cluster on CentOS
Complex Event Processing in Flink
Create a Flink Application in Java Eclipse
Flink Wordcount Program

Master new skills and evolve as an expert
Apache Flink – A Big Data Processing Framework
Flink Use Cases: Real-life Case Studies
Big Data Use Cases: Hadoop, Spark, and Flink Case Studies
Flink Use Case: Crime Data Analysis, Part 1
Flink Use Case: Crime Data Analysis, Part 2
Hadoop + Flink Compatibility
Flink vs Spark
Flink vs Spark vs Hadoop
Exploring the Framework
Let’s take a look at some facts about Flink and its philosophies.
The creators of Flink began it as a university research project before deciding to turn it into a full-fledged company. They founded data Artisans in 2014 with the goal of building a large-scale data processing technology that is both open source and rooted in long-tested principles and architectures. Flink is an open-source stream-processing framework, now developed under the Apache Software Foundation. It is built around a distributed streaming dataflow engine, written in Java and Scala, that executes arbitrary dataflow programs in a parallel and pipelined manner.
Programs written in Java, Scala, Python, or SQL are automatically compiled and optimized into dataflow programs, which are then executed in a cluster or cloud environment.
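To make this concrete, here is a minimal sketch of a Flink program using the DataStream API: a streaming word count in Java. The class and variable names are illustrative; the API calls (`StreamExecutionEnvironment`, `flatMap`, `keyBy`, `sum`) are the standard Flink DataStream operations. Flink turns this high-level pipeline into an optimized dataflow graph and runs it in parallel on a local machine or a cluster.

```java
import org.apache.flink.api.common.functions.FlatMapFunction;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.util.Collector;

public class WordCount {

    public static void main(String[] args) throws Exception {
        // Entry point to the Flink runtime: local when run from an IDE,
        // distributed when submitted to a cluster.
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // A tiny in-memory source for illustration; in practice this
        // would be a socket, Kafka topic, file, or other connector.
        DataStream<String> text = env.fromElements("to be or not to be");

        text.flatMap(new Tokenizer())   // split lines into (word, 1) pairs
            .keyBy(pair -> pair.f0)     // group by the word
            .sum(1)                     // running count per word
            .print();                   // emit results to stdout

        // Nothing runs until execute() builds and launches the dataflow graph.
        env.execute("Streaming WordCount");
    }

    /** Splits each line into lowercase words, emitting (word, 1) pairs. */
    public static final class Tokenizer
            implements FlatMapFunction<String, Tuple2<String, Integer>> {
        @Override
        public void flatMap(String line, Collector<Tuple2<String, Integer>> out) {
            for (String word : line.toLowerCase().split("\\s+")) {
                if (!word.isEmpty()) {
                    out.collect(new Tuple2<>(word, 1));
                }
            }
        }
    }
}
```

The same pipeline could be written in Scala, Python (PyFlink), or SQL; in every case Flink compiles it into the same kind of parallel, pipelined dataflow described above.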
