The GoodData BI platform is accessible through a stateless REST API. This HTTP-based API can easily be used from any third-party application as well as from a plain browser. The API exposes the full power of our platform (we actually use it as the backend for our own web frontend).
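As a quick taste, here is a minimal Python sketch (using the requests library) that asks the API root for the list of services. The response shape noted in the comment is an assumption, and some resources require a logged-in session (more on the LOGIN service at the end of the walkthrough below):

import requests

# Fetch the API root; /gdc lists the REST services the platform exposes.
resp = requests.get("http://demo.gooddata.com/gdc",
                    headers={"Accept": "application/json"})
resp.raise_for_status()
print(resp.json())  # assumed: a JSON document with links to the individual services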
In fact, the GoodData application consists of a handful of service types. Instances of these services can be dynamically added or removed (via simple HTTP load balancing) on an as-needed basis. Add the Amazon EC2 cloud, which lets us bring a new machine up or down and pay only for the CPU ticks we really use, and the net result is great flexibility, scalability, and cost efficiency.
The demo video below points out the fundamental architectural differences between our approach and that of some other on-demand BI vendors who have simply deployed an existing BI package (e.g. Pentaho or MS Analytics) on the web (which unfortunately does not prevent their marketing from using the multi-tenant, SaaS mumbo jumbo).
This video might help you better understand the GoodData architecture. I apologize for the lack of audio; hopefully the simple step-by-step description below helps:
1. The /gdc suffix in the GDC BI platform URL shows the list of the REST API services that the platform provides.
2. Then we navigate to the metadata services that manage metadata for a selected BI project (the FoodMartDemo in our case).
3. We first show the FULL-TEXT SEARCH service. We specify the search term ("sales") directly in the service's URL. The list of matching results is shown.
4. We select one of the reports from the search result to inspect the report's definition. We can spit out the definition in many formats (e.g. JSON, YAML, ATOM, or XML). We use YAML as the default.
5. Then we demonstrate the metadata QUERY service. We list all reports in the FoodMartDemo project. We again inspect one of the reports: Salary by Year and State.
6. Then we demonstrate the USING service, which shows all dependencies of the report (the metadata objects that the selected report references). For example, the report depends on its definition (reportDefinition) object. We copy and paste the report definition's link into the browser URL bar to inspect the report definition object's structure. It contains all the attributes and metrics that the report displays (all inner objects have their own URLs too, so we could continue investigating them).
7. Then we navigate to the XTAB service. XTAB can execute and cross-tabulate (or pivot, if you like) the report's definition. We supply the report definition URL and it spits out a representation of the report result (you can see the machine representation of the report's data). Notice the asynchronous processing here.
8. Then we go back to the original report Salary by Year and State. The report contains a reference to its result.
9. We copy and paste the result's URL into the EXPORTER service, which returns (again asynchronously) the report result's data in MS Excel format. (A rough scripted version of these steps is sketched right after this list.)
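For readers who prefer code to video, here is a rough Python sketch of steps 3 through 9, assuming you already hold an authenticated session (the login step itself is sketched after the next paragraph). The concrete resource paths and response shapes below are my assumptions based on the walkthrough, not a reference; in practice you should discover the real URLs from the links embedded in the API responses rather than hard-coding them.

import time
import requests

BASE = "http://demo.gooddata.com"
PROJECT = "FoodMartDemo"      # placeholder project identifier
headers = {"Accept": "application/json"}  # the API can also return YAML, ATOM, or XML
session = requests.Session()  # assumed to already carry the LOGIN cookies (see below)

# Step 3: full-text search -- the search term is part of the service URL.
# The /gdc/md/... path is an assumption; discover it from the /gdc root.
search = session.get(f"{BASE}/gdc/md/{PROJECT}/find/sales", headers=headers).json()

# Steps 4-6: pick a report from the results and follow its links to the
# report object and its reportDefinition.
report_uri = search["entries"][0]["link"]                         # assumed response shape
report = session.get(BASE + report_uri, headers=headers).json()
definition_uri = report["report"]["content"]["definitions"][-1]   # assumed response shape

# Step 7: ask the XTAB service to execute the definition; execution is
# asynchronous, so we poll until the result is ready.
executed = session.post(f"{BASE}/gdc/xtab/executor",               # assumed path
                        json={"report_req": {"reportDefinition": definition_uri}})
poll_uri = executed.json()["asyncTask"]["link"]["poll"]            # assumed response shape
resp = session.get(BASE + poll_uri, headers=headers)
while resp.status_code == 202:                                     # 202 Accepted = still computing
    time.sleep(1)
    resp = session.get(BASE + poll_uri, headers=headers)
result = resp.json()

# Step 9: hand the result over to the EXPORTER service to get an XLS download.
export = session.post(f"{BASE}/gdc/exporter/executor",             # assumed path
                      json={"result_req": {"format": "xls", "result": result}})
print(export.json())  # assumed to contain the URL of the exported file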
If you have a GoodData platform demo account, you can try this script yourself at http://demo.gooddata.com/gdc (hint: you'll need to take a look at the LOGIN service).
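For completeness, a hedged Python sketch of that login step follows. The /gdc/account/login path and the postUserLogin payload are my assumptions, so verify them against the LOGIN entry in the /gdc service list first:

import requests

session = requests.Session()
# Assumed login resource and payload -- check the LOGIN entry in the /gdc
# service list before relying on them.
payload = {"postUserLogin": {"login": "you@example.com",
                             "password": "secret",
                             "remember": 1}}
resp = session.post("http://demo.gooddata.com/gdc/account/login", json=payload)
resp.raise_for_status()
# The session now carries the authentication cookies, so subsequent requests
# against /gdc resources run as the logged-in user.
print(session.get("http://demo.gooddata.com/gdc").status_code)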
Wednesday, March 19, 2008
Just-in-Time vs. Just-in-Case BI Costs
Do you know how much power your BI really needs? More precisely, how much power it needs today at 9 AM, next weekend, and on the last day of the quarter or year? Have you bought the ultra-super-duper machine that handles even the highest usage spikes with ease? Or have you decided to sacrifice performance during these peak hours? Do you wait or waste?
The GoodData approach to this dilemma can be described with two keywords: Stateless & Virtualized.
Stateless is about our architecture. Our product relies on six generic stateless services. Statelessness is important for scalability: we can dynamically add instances of any of the six generic services whenever we need to increase the throughput of our BI platform.
Virtualized is how these services are deployed. Virtualization allows us to flexibly add hardware nodes to our computing cloud. We keep images of the different virtual nodes on hand, so we can create a new node and dynamically add it to the cloud. The beauty is that all of this can happen in just a few minutes, and decommissioning such a node is even faster.
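Our internal provisioning tooling is not shown here, but purely as an illustration of what "create a new node from a prepared image" can look like, here is a sketch against Amazon's EC2 API using the boto3 Python library; the image id, instance type, and tag are placeholders, not our actual configuration:

import boto3

ec2 = boto3.resource("ec2", region_name="us-east-1")

# Launch one worker node from a prepared machine image (all values are placeholders).
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",   # image of a generic, stateless service node
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{"ResourceType": "instance",
                        "Tags": [{"Key": "role", "Value": "xtab-worker"}]}],
)
node = instances[0]
node.wait_until_running()
node.reload()  # refresh attributes such as the IP address
print("new node ready:", node.id, node.private_ip_address)

# Decommissioning is just as direct:
# node.terminate()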
We (and you, as our customer) pay for CPU ticks and storage, so Stateless & Virtualized gives you unmatched cost efficiency. GoodData offers you access to virtually unlimited computing resources: you can get as much CPU, storage, and network bandwidth as you need, and you pay only for what you really consume.
Pay for your BI project on a Just-in-Time basis, not a Just-in-Case one.