Apache Storm is a distributed real-time computation system. Storm makes it easy to reliably process unbounded streams of data, doing for real-time processing what Hadoop did for batch processing. A Storm application is structured as a DAG of nodes, called a topology.
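For illustration, here is a minimal sketch of such a topology: a two-node DAG wiring a spout to a bolt. The bolt is a hypothetical stand-in written for this example; TestWordSpout ships with Storm, and the backtype.storm package matches the Storm version referenced later in this document.

    import backtype.storm.Config;
    import backtype.storm.LocalCluster;
    import backtype.storm.testing.TestWordSpout;
    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.TopologyBuilder;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Tuple;

    public class ExampleTopology {

        // Hypothetical terminal bolt: prints each word emitted by the spout.
        public static class EchoBolt extends BaseBasicBolt {
            @Override
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                System.out.println(tuple.getStringByField("word"));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                // Terminal node: declares no output stream.
            }
        }

        public static void main(String[] args) throws Exception {
            // The topology is a DAG: the spout is a source node, the bolt a
            // downstream node, and the grouping defines the edge between them.
            TopologyBuilder builder = new TopologyBuilder();
            builder.setSpout("words", new TestWordSpout());
            builder.setBolt("echo", new EchoBolt()).shuffleGrouping("words");

            LocalCluster cluster = new LocalCluster();
            cluster.submitTopology("example-topology", new Config(), builder.createTopology());
        }
    }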
Apache Atlas is a metadata repository that enables end-to-end data lineage, search, and the association of business classifications.
The goal of this integration is to push the operational topology metadata, along with the underlying data source(s), target(s), derivation processes and any available business context, to Atlas so that it can capture the lineage for the topology.
There are two parts to this process, detailed below: a data model to represent the Storm concepts in Atlas, and the Storm Atlas hook that updates the metadata in Atlas when a topology is submitted.
A data model is represented as Types in Atlas. It contains descriptions of the various nodes in the topology graph, such as spouts and bolts, and the corresponding producer and consumer types.
The following types are added in Atlas: storm_topology to represent the topology itself, storm_spout and storm_bolt for its nodes, along with types for the referenced data sets such as kafka_topic, jms_topic and hbase_table.
The Storm Atlas hook automatically registers dependent models, like the Hive data model, if it finds that these are not known to the Atlas server.
The data model for each of the types is described in the class definition at org.apache.atlas.storm.model.StormDataModel.
Atlas is notified when a new topology is registered successfully in Storm. Storm provides a client-side hook interface, backtype.storm.ISubmitterHook, that is invoked when a Storm topology is submitted.
The Storm Atlas hook implements this interface in org.apache.atlas.storm.hook.StormAtlasHook; it is invoked after the submission completes, extracts the metadata from the topology, and updates Atlas using the types defined above.
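For illustration, a custom submitter hook implements the same interface that StormAtlasHook does. A minimal sketch, assuming the notify signature of backtype.storm.ISubmitterHook:

    import java.util.Map;

    import backtype.storm.ISubmitterHook;
    import backtype.storm.generated.StormTopology;
    import backtype.storm.generated.TopologyInfo;

    // Hypothetical hook that just logs submissions; StormAtlasHook implements
    // this same interface to extract and publish topology metadata.
    public class LoggingSubmitterHook implements ISubmitterHook {
        @Override
        public void notify(TopologyInfo topologyInfo, Map stormConf, StormTopology topology)
                throws IllegalAccessException {
            // Invoked on the client after a successful topology submission.
            System.out.println("Submitted topology: " + topologyInfo.get_name());
        }
    }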
The following installation and configuration steps apply for the first version of the integration.
The Storm Atlas Hook needs to be manually installed in Storm on the client side. The hook artifacts are available at: $ATLAS_PACKAGE/hook/storm
The Storm Atlas hook jars need to be copied to $STORM_HOME/extlib, as shown below. Replace $STORM_HOME with the Storm installation path.
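For example, assuming the package layout above, the copy step amounts to:

    cp $ATLAS_PACKAGE/hook/storm/* $STORM_HOME/extlib/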
Restart all Storm daemons after you have installed the Atlas hook.
The Storm Atlas hook needs to be configured in the Storm client configuration in $STORM_HOME/conf/storm.yaml as:
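    storm.topology.submission.notifier.plugin.class: "org.apache.atlas.storm.hook.StormAtlasHook"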
Also set a 'cluster name' that will be used as a namespace for the objects registered in Atlas. This name is used to namespace the Storm topology, spouts and bolts.
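For example, assuming the hook reads the standard Atlas client configuration key atlas.cluster.name (which defaults to "primary") from the Atlas properties available to the Storm client, the cluster name can be set as:

    atlas.cluster.name=primary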
Other objects, like data sets, should ideally be identified with the cluster name of the components that generate them. For example, Hive tables and databases should be identified using the cluster name set in Hive. The Storm Atlas hook will pick this up if the Hive configuration is available in the Storm topology jar that is submitted on the client and the cluster name is defined there. This happens similarly for HBase data sets. If this configuration is not available, the cluster name set in the Storm configuration will be used.
In $STORM_HOME/conf/storm_env.ini, set an environment variable as follows:
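    STORM_JAR_JAAS_FILE="$ATLAS_HOME/conf/atlas_jaas.conf"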
where $ATLAS_HOME points to the Atlas installation path.
You could also set this up programmatically in the Storm Config as:
    Config stormConf = new Config();
    ...
    stormConf.put(Config.STORM_TOPOLOGY_SUBMISSION_NOTIFIER_PLUGIN,
            org.apache.atlas.storm.hook.StormAtlasHook.class.getName());