Apache Falcon
Falcon:
- More of a scheduling and execution engine for HDP components such as Hive, Spark, HDFS DistCp and Sqoop, used to move data around and/or process it along the way. In a way, Falcon is a much-improved Oozie (see the submission sketch after this list).
- Metadata of Falcon dataflows is actually published to Atlas through Kafka topics, so Atlas knows about Falcon metadata too and can include Falcon processes and their resulting metadata objects (tables, HDFS folders, flows) in its lineage graphs (a small consumer sketch follows below).
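
To make the first point concrete, here is a minimal sketch of registering a Falcon process through Falcon's REST API. The host, port, user, entity name, workflow path and dates are all placeholders invented for the example, and the exact endpoint path and XML element order should be checked against the Falcon version you run; treat this as an illustration, not a recipe.

```python
import requests

FALCON_URL = "http://falcon-host:15000"  # assumed Falcon server endpoint (placeholder host)
USER = "falcon"                          # assumed proxy user for the user.name query param

# A hypothetical process entity: run a Hive script once a day on one cluster.
# All names, paths and validity dates below are placeholders.
PROCESS_XML = """<process name="daily-hive-aggregation" xmlns="uri:falcon:process:0.1">
  <clusters>
    <cluster name="primary-cluster">
      <validity start="2016-01-01T00:00Z" end="2017-01-01T00:00Z"/>
    </cluster>
  </clusters>
  <parallel>1</parallel>
  <order>FIFO</order>
  <frequency>days(1)</frequency>
  <timezone>UTC</timezone>
  <workflow engine="hive" path="/apps/falcon/aggregate.hql"/>
  <retry policy="periodic" delay="minutes(10)" attempts="3"/>
</process>"""

def submit_process(xml: str) -> None:
    """Submit (register) a process entity; scheduling it is a separate call."""
    resp = requests.post(
        f"{FALCON_URL}/api/entities/submit/process",  # assumed REST path, verify per version
        params={"user.name": USER},
        data=xml.encode("utf-8"),
        headers={"Content-Type": "text/xml"},
    )
    resp.raise_for_status()
    print(resp.text)

if __name__ == "__main__":
    submit_process(PROCESS_XML)
```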
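
And for the second point: you can eyeball the Falcon-to-Atlas flow by reading the notification topic directly. A rough sketch, assuming the kafka-python client is installed, that the broker address is a placeholder for your cluster, and that the topic name ATLAS_HOOK matches your Atlas configuration (it is the usual default hook topic).

```python
from kafka import KafkaConsumer  # pip install kafka-python (assumed client library)

# Atlas hooks (Falcon, Hive, Sqoop, ...) push entity/lineage notifications onto a
# Kafka topic that the Atlas server consumes. "ATLAS_HOOK" is the usual default
# topic name; the broker address below is a placeholder.
consumer = KafkaConsumer(
    "ATLAS_HOOK",
    bootstrap_servers=["kafka-broker:6667"],
    auto_offset_reset="earliest",
    consumer_timeout_ms=10000,  # stop iterating after 10s with no new messages
)

for message in consumer:
    payload = message.value.decode("utf-8", errors="replace")
    # Each payload is a JSON notification describing created/updated metadata
    # entities (e.g. a Falcon process or feed); print a short preview of each.
    print(payload[:200])
```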
I know that in the docs both tools claim the term 'data governance', but I feel Atlas is more about that than Falcon is. It is not all that clear what data governance actually means. With Atlas you can really apply governance by collecting all the metadata, then querying and tagging it; Falcon can at best execute the processes that revolve around that by moving data from one place to another (and yes, Falcon moving a dataset from an analysis cluster to an archiving cluster is also a data governance/management concern).
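
To make the 'querying and tagging' side concrete, here is a rough sketch of attaching a classification (tag) to an entity and then searching by that tag through Atlas's v2 REST API. The host, credentials, GUID and tag name are placeholders, the classification type is assumed to already exist in Atlas, and the endpoint paths reflect the v2 API as I understand it, so verify them against your Atlas version.

```python
import requests

ATLAS_URL = "http://atlas-host:21000"  # 21000 is the usual Atlas port; host is a placeholder
AUTH = ("admin", "admin")              # placeholder credentials

def tag_entity(guid: str, tag: str) -> None:
    """Attach an existing classification (tag) to one metadata entity by GUID."""
    resp = requests.post(
        f"{ATLAS_URL}/api/atlas/v2/entity/guid/{guid}/classifications",
        json=[{"typeName": tag}],
        auth=AUTH,
    )
    resp.raise_for_status()

def find_tagged(tag: str, type_name: str = "hive_table") -> list:
    """Basic search: list entities of a given type that carry the tag."""
    resp = requests.get(
        f"{ATLAS_URL}/api/atlas/v2/search/basic",
        params={"classification": tag, "typeName": type_name, "limit": 25},
        auth=AUTH,
    )
    resp.raise_for_status()
    return resp.json().get("entities", [])

if __name__ == "__main__":
    tag_entity("hypothetical-guid-1234", "PII")  # GUID and tag name are placeholders
    for entity in find_tagged("PII"):
        print(entity.get("typeName"), entity.get("displayText"))
```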