How do I scale my stack as my data grows?


If a user deploys Hadoop and Drill, one of the key questions is scalability. As the data grows, so do the server requirements; can a user scale the platform easily?


Some charms scale, some don't. For example, Saiku currently only works in a single-node configuration. We're working on fixing that for Saiku 4; in the meantime, the way to improve Saiku's performance is to run it on a bigger machine by setting Juju constraints.
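As a sketch, assuming the charm is deployed under the name `saiku` (the name and the memory/core values are placeholders, not prescribed sizes), constraints can be set like this. The `run` wrapper just echoes each command so the snippet is safe to paste outside a live environment:

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Drop the 'run' prefix to execute against a real Juju environment.
run() { echo "+ $*"; }

# Pin Saiku to a larger machine via constraints (placeholder values).
run juju set-constraints saiku mem=16G cores=8

# Or apply the constraints at deploy time instead.
run juju deploy saiku --constraints "mem=16G cores=8"
```

Constraints apply to machines provisioned after they are set, so set them before adding or redeploying the unit.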

For components like Hadoop slaves, HBase, and NameNodes, you can scale on the fly:

juju add-unit -n <count> <service-name>

You can also remove nodes:

juju remove-unit <service-name>/<unit-number>
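Putting the two commands together, a scale-out/scale-in cycle might look like the following sketch. The service name `hadoop-slave` and the unit number are placeholders; the `run` wrapper echoes rather than executes, so it can be tried without a live controller:

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# Scale out: add two more units of the service (placeholder name).
run juju add-unit -n 2 hadoop-slave

# Inspect the deployment before scaling back in.
run juju status hadoop-slave

# Scale in: remove a specific unit by its name.
run juju remove-unit hadoop-slave/2
```

Note that removing a unit does not destroy the underlying machine; clean up idle machines separately if you want to release them.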