Get in touch with us by booking a demo so that we can learn more about your use case and answer your questions about the platform. We'll then invite you to a shared Slack channel that we will use for most of our interactions and for live support. We will send you instructions on Slack on how to get started -- the first step is to grant Data Mechanics scoped permissions on the AWS, GCP, or Azure account of your choice.
As soon as an application finishes, our dashboard provides this information. For recurring applications (which we call "jobs"), you can also track the evolution of your Data Mechanics costs, along with other key metrics, over time. Finally, at the end of each month we produce a billing report with detailed information and a useful cost attribution breakdown.
Yes. At the end of the month, you will get one bill from the cloud provider and one bill from Data Mechanics. Your cloud credits will apply to the cloud provider bill, which typically makes up the larger portion of your costs.
We've achieved 50% to 75% cost reductions for customers who migrated their workloads from a competing platform. This is because our features reduce your cloud costs (automated pipeline tuning, a single shared cluster with fast autoscaling, spot node support), and because our management fee is lower than that of other Spark platforms.
We've achieved 50% to 75% cost reductions for customers migrating their workloads from a competing platform. If you'd like a more detailed estimate of your potential savings, install our free, cross-platform monitoring tool Data Mechanics Delight on top of your current Spark platform, then get in touch with us by booking a demo. We will analyze the logs collected by Delight to estimate the savings we can generate for your Spark workloads.
You have control over the autoscaling behavior of both the Kubernetes cluster and each Spark application running on it. For example, you can set a maximum size for the cluster and a maximum size for each individual Spark application.
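As a sketch of what the per-application cap can look like, the snippet below uses Apache Spark's standard dynamic allocation properties (these are real Spark settings; whether Data Mechanics exposes them exactly this way, and the specific values shown, are assumptions for illustration -- the cluster-wide maximum is configured separately at the Kubernetes cluster level):

```
# spark-defaults.conf style fragment (illustrative values)
# Let the application scale its executor count up and down...
spark.dynamicAllocation.enabled       true
# ...but never beyond this per-application ceiling.
spark.dynamicAllocation.maxExecutors  10
spark.dynamicAllocation.minExecutors  1
```

With a cap like this on each application and a maximum node count on the cluster itself, a single runaway job cannot consume the whole cluster or drive up costs unexpectedly.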