Today I am excited to announce a new capability in our Managed Cloud Hadoop and Spark Service. In the past, our users provisioned their Hadoop or Spark clusters from a set of pre-defined stacks we had created. While this worked for many of our users, others wanted more flexibility and more control. To that end, we have spent the last year building an entirely new provisioning system that allows our customers to create fully customized stacks. Instead of settling for the default number and size of master services nodes, users can now define individual node sizes and how many of each. You want 3 name nodes instead of 2? No problem. You want 8GB of memory per name node instead of 4? No problem. You want to create a stack with your ideal configuration of services, such as Spark with Kafka and Zeppelin, but without some of the other components that come by default? No problem.
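To make the idea concrete, here is a minimal sketch of what a fully customized stack definition could look like. All field names here are hypothetical, invented for illustration; they do not reflect the service's actual API:

```python
# Hypothetical stack definition: node counts, per-node sizing, and a
# hand-picked service list, mirroring the examples above (3 name nodes,
# 8 GB each; Spark, Kafka, and Zeppelin only). Illustrative only.
custom_stack = {
    "name": "my-custom-spark-stack",
    "node_groups": {
        "name_node": {"count": 3, "memory_gb": 8},    # 3 name nodes, 8 GB each
        "worker":    {"count": 10, "memory_gb": 16},  # assumed worker tier
    },
    "services": ["Spark", "Kafka", "Zeppelin"],       # only what you need
}

def total_memory_gb(stack):
    """Sum memory across every node in every node group."""
    return sum(g["count"] * g["memory_gb"] for g in stack["node_groups"].values())

print(total_memory_gb(custom_stack))  # 3*8 + 10*16 = 184
```

The point of a definition like this is that sizing and service selection live in one declarative description, rather than being fixed by a pre-defined stack.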
For questions on our Big Data services, feel free to email me at email@example.com.
Read more on The Rackspace Blog.