All configuration in Atlas uses Java properties style configuration. The main configuration file is application.properties, which is in the conf dir at the deployed location. It consists of the following sections:
This section sets up the graph db - titan - to use a persistence engine. Please refer to the Titan documentation for more details. The example below uses BerkeleyDB JE.
atlas.graph.storage.backend=berkeleyje
atlas.graph.storage.directory=data/berkley
Basic configuration
atlas.graph.storage.backend=hbase
#For standalone mode, specify localhost
#For distributed mode, specify the zookeeper quorum here - for more information refer http://s3.thinkaurelius.com/docs/titan/current/hbase.html#_remote_server_mode_2
atlas.graph.storage.hostname=<ZooKeeper Quorum>
The HBASE_CONF_DIR environment variable needs to be set to point to the HBase client configuration directory, which is added to the classpath when Atlas starts up. hbase-site.xml needs to have the following properties set according to the cluster setup:
#Set below to /hbase-secure if the HBase server is setup in secure mode
zookeeper.znode.parent=/hbase-unsecure
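For example, before starting Atlas the environment variable could be exported as below (the path is illustrative; use the HBase client configuration directory of your cluster):

# Illustrative only - point Atlas at the HBase client configuration before startup
export HBASE_CONF_DIR=/etc/hbase/conf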
Advanced configuration
# If you are planning to use any of the configs mentioned below, they need to be prefixed with "atlas.graph." to take effect in Atlas.
# Refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/titan-config-ref.html#_storage_hbase
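For example, assuming the Titan 0.5.4 options storage.hbase.table and storage.lock.wait-time from the reference above, the prefixed forms would look like the following (the values are illustrative only, not recommendations):

# Illustrative only - Titan options prefixed with "atlas.graph." for Atlas
atlas.graph.storage.hbase.table=titan
atlas.graph.storage.lock.wait-time=10000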
Permissions
When Atlas is configured with HBase as the storage backend, the graph db (titan) needs sufficient user permissions to be able to create and access an HBase table. In a secure cluster, it may be necessary to grant permissions to the 'atlas' user for the 'titan' table.
With Ranger, a policy can be configured for 'titan'.
Without Ranger, HBase shell can be used to set the permissions.
su hbase
kinit -k -t <hbase keytab> <hbase principal>
echo "grant 'atlas', 'RWXCA', 'titan'" | hbase shell
This section sets up the graph db - titan - to use a search indexing system. The example configuration below sets up the use of an embedded Elasticsearch indexing system.
atlas.graph.index.search.backend=elasticsearch
atlas.graph.index.search.directory=data/es
atlas.graph.index.search.elasticsearch.client-only=false
atlas.graph.index.search.elasticsearch.local-mode=true
atlas.graph.index.search.elasticsearch.create.sleep=2000
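If an existing (non-embedded) Elasticsearch cluster is to be used instead, the same backend can be pointed at it. A minimal sketch, assuming Titan's index.search.hostname and client-only options (hostnames are illustrative):

# Illustrative only - external Elasticsearch cluster instead of embedded mode
atlas.graph.index.search.backend=elasticsearch
atlas.graph.index.search.hostname=<comma separated list of Elasticsearch hosts>
atlas.graph.index.search.elasticsearch.client-only=true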
Please note that Solr installation in Cloud mode is a prerequisite before configuring Solr as the search indexing backend. Refer to the InstallationSteps section for Solr installation/configuration.
atlas.graph.index.search.backend=solr5
atlas.graph.index.search.solr.mode=cloud
atlas.graph.index.search.solr.zookeeper-url=<the ZK quorum setup for solr as comma separated value> eg: 10.1.6.4:2181,10.1.6.5:2181
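As a sketch of that prerequisite, the index collections used by Atlas (vertex_index, edge_index, fulltext_index) can be created on a SolrCloud setup roughly as below; the configuration directory and the shard/replica counts are illustrative, so refer to InstallationSteps for the exact procedure:

# Illustrative only - create the Atlas index collections in SolrCloud mode
bin/solr create -c vertex_index -d <atlas solr configuration directory> -shards 2 -replicationFactor 2
bin/solr create -c edge_index -d <atlas solr configuration directory> -shards 2 -replicationFactor 2
bin/solr create -c fulltext_index -d <atlas solr configuration directory> -shards 2 -replicationFactor 2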
Refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/bdb.html and http://s3.thinkaurelius.com/docs/titan/0.5.4/hbase.html for choosing between the persistence backends. BerkeleyDB is suitable for smaller data sets, in the range of up to 10 million vertices, with ACID guarantees. HBase, on the other hand, doesn't provide ACID guarantees but is able to scale for larger graphs. HBase also provides HA inherently.
Refer to http://s3.thinkaurelius.com/docs/titan/0.5.4/elasticsearch.html and http://s3.thinkaurelius.com/docs/titan/0.5.4/solr.html for choosing between Elasticsearch and Solr. Solr in cloud mode is the recommended setup.
For switching the storage backend from BerkeleyDB to HBase and vice versa, refer to the documentation for "Graph Persistence Engine" described above and restart Atlas. The data in the indexing backend needs to be cleared; otherwise there will be discrepancies between the storage and indexing backends, which could result in errors during search. ElasticSearch runs by default in embedded mode, and its data can easily be cleared by deleting the ATLAS_HOME/data/es directory. For Solr, the collections created during Atlas installation - vertex_index, edge_index, fulltext_index - can be deleted, which will clean up the indexes.
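A sketch of clearing the index data in each case, assuming ATLAS_HOME points to the deployed location, an embedded Elasticsearch data directory as configured above, and Solr 5's delete command for the collections:

# Embedded Elasticsearch - remove the index data directory
rm -rf $ATLAS_HOME/data/es

# Solr - delete the collections created during Atlas installation
bin/solr delete -c vertex_index
bin/solr delete -c edge_index
bin/solr delete -c fulltext_index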
Switching the index backend requires clearing the persistence backend data; otherwise there will be discrepancies between the persistence and index backends, since switching the indexing backend means index data will be lost. This leads to "Fulltext" queries not working on the existing data. To clear the data for BerkeleyDB, delete the ATLAS_HOME/data/berkeley directory. To clear the data for HBase, in the HBase shell, run disable 'titan' followed by drop 'titan'.
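A sketch of clearing the persistence data in each case, assuming ATLAS_HOME points to the deployed location and the default 'titan' table name used above:

# BerkeleyDB - remove the data directory
rm -rf $ATLAS_HOME/data/berkeley

# HBase - disable and drop the titan table from the HBase shell
echo "disable 'titan'" | hbase shell
echo "drop 'titan'" | hbase shell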
The higher layer services like lineage, schema, etc. are driven by the type system, and this section encodes the specific types for the Hive data model.
# This model reflects the base super types for Data and Process
atlas.lineage.hive.table.type.name=DataSet
atlas.lineage.hive.process.type.name=Process
atlas.lineage.hive.process.inputs.name=inputs
atlas.lineage.hive.process.outputs.name=outputs

## Schema
atlas.lineage.hive.table.schema.query=hive_table where name=?, columns
Refer to http://kafka.apache.org/documentation.html#configuration for Kafka configuration. All Kafka configs should be prefixed with 'atlas.kafka.'.
atlas.notification.embedded=true
atlas.kafka.data=${sys:atlas.home}/data/kafka
atlas.kafka.zookeeper.connect=localhost:9026
atlas.kafka.bootstrap.servers=localhost:9027
atlas.kafka.zookeeper.session.timeout.ms=400
atlas.kafka.zookeeper.sync.time.ms=20
atlas.kafka.auto.commit.interval.ms=1000
atlas.kafka.hook.group.id=atlas
Note that Kafka group ids are specified for a specific topic. The Kafka group id configuration for entity notifications is 'atlas.kafka.entities.group.id'.
atlas.kafka.entities.group.id=<consumer id>