Editor’s note: This is the second of a two-part series on business intelligence tools. If you’re using a business intelligence tool to answer these kinds of questions, you’re likely to run into problems. Decisions and conversations about technology need to take the demand side into account, but answering these types of questions requires going beyond the pooling of costs from the general ledger that business intelligence tools perform. It requires a sophisticated disaggregation, or rules-based apportioning, of pooled costs among the consuming technology or business unit elements.
This routing of costs must be defensible and explainable to technology owners and business unit leaders in order to earn their buy-in and trust for business decisions. Unfortunately, business intelligence tools lack the business logic needed to allocate costs intelligently, routed by technical relationships and weighted by actual consumption data. An alternative approach to IT costing with business intelligence tools involves the concept of rate cards. The idea is to work around the lack of allocation logic by assigning a rate to each type of element in the IT supply chain and then using unit volumes to calculate and aggregate the resulting costs.
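To make the idea of rules-based, consumption-weighted allocation concrete, here is a minimal sketch. All names and figures are hypothetical; a real system would apply many such rules across a chain of cost pools, but the core step is apportioning a pooled cost in proportion to measured consumption.

```python
# Sketch of consumption-weighted cost allocation (illustrative names and figures).
# A pooled cost (e.g., a shared storage array) is apportioned among consuming
# business units in proportion to their actual measured usage.

def allocate_pooled_cost(pooled_cost, consumption):
    """Split pooled_cost across consumers proportionally to their usage."""
    total = sum(consumption.values())
    return {consumer: pooled_cost * used / total
            for consumer, used in consumption.items()}

# Hypothetical monthly storage consumption in terabytes per business unit.
usage_tb = {"Sales": 40, "Engineering": 120, "Finance": 40}
shared_storage_cost = 10_000.0  # pooled monthly cost in dollars

print(allocate_pooled_cost(shared_storage_cost, usage_tb))
# Engineering bears 60% of the cost because it drove 60% of the consumption.
```

Because the split follows actual consumption data, each business unit leader can see exactly why they were charged what they were, which is what makes the allocation defensible.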
This typically begins with the periodic (often annual) establishment of rates that define the price of each discrete unit of various IT offerings. For instance, rate cards might be defined for units like “desktop compute user” or “megabyte of storage.” Rate cards define prices derived from an analysis of the prior year’s costs plus expected cost growth. Then, each month, updated IT operational data is loaded into the business intelligence system to enumerate the IT resources. These units are costed by applying rates from the rate card, and the system aggregates the resulting costs to determine the total cost of applications or IT services.
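The rate-card calculation described above is simple to sketch. The rates, unit names, and volumes below are hypothetical; the point is only that the monthly cost of a service is the sum of rate-times-volume across its unit types.

```python
# Sketch of rate-card costing (hypothetical rates and volumes).
# Rates are set once per year; monthly unit volumes from operational data
# are multiplied by those rates and aggregated into a service-level cost.

rate_card = {
    "desktop_compute_user": 55.0,  # $ per user per month
    "storage_gb": 0.25,            # $ per gigabyte per month
    "vm_instance": 120.0,          # $ per VM per month
}

def cost_service(unit_volumes, rates):
    """Cost a service by applying rate-card rates to its unit volumes."""
    return sum(rates[unit] * volume for unit, volume in unit_volumes.items())

# Hypothetical monthly operational data for one IT service.
crm_volumes = {"desktop_compute_user": 200, "storage_gb": 5000, "vm_instance": 12}
print(cost_service(crm_volumes, rate_card))  # → 13690.0
```

Note that the accuracy of the output depends entirely on rates that were fixed at the start of the year, which is exactly where the approach breaks down, as described next.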
On the surface, business intelligence tools seem a good fit for this rate card approach because they provide high-performance aggregation of the base metrics into views by technology stack, application, IT service, or business unit. However, in practical, real-world usage, business intelligence tools fall short. The flaw in this process is that it offers only an estimate of costs based on rate cards that may have been calculated months before, and it may exclude hidden costs from shared resources or overhead.
Or it may include shared or overhead resources, but in a way that results in over-counting. This is not to say that rate cards are inherently bad, but when they form the basis of most cost calculations, they are almost certain to yield inaccurate costs. Most organizations that establish rate cards fail to fully account for shared or indirect costs such as datacenter power and cooling, telecom, support labor, and so on.
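A toy calculation (all figures hypothetical) shows how much can be lost when a rate is built only from direct costs: every dollar of shared overhead that the rate omits is a dollar that never appears in any aggregated report.

```python
# Toy illustration (hypothetical figures) of why rates built only from
# direct costs systematically understate true spend.

vm_count = 500
direct_costs = 600_000.0     # annual server hardware and licenses
shared_overhead = 180_000.0  # power/cooling, telecom, support labor

naive_rate = direct_costs / vm_count                       # 1200.0 per VM
loaded_rate = (direct_costs + shared_overhead) / vm_count  # 1560.0 per VM

# Costing 500 VMs with the naive rate "loses" the entire overhead pool:
understated = (loaded_rate - naive_rate) * vm_count
print(understated)  # 180000.0 of real cost never shows up in the reports
```

The opposite error (over-counting) occurs when overhead is baked into several different rates at once, so the same pooled cost is charged out more than one time.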
It’s tempting to rely on already-deployed business intelligence platforms to determine IT costs, but a major weakness of many of them is their requirement that data be sanitized before entering the data warehouse layer. In addition to finance data, the cost calculations described earlier depend on incorporating IT operational data such as the business structure (business units, headcount), technology stack, application or IT service definitions, and unit quantity or usage data. Unfortunately, most of these data sources aren’t well organized for integration with one another, and things are further complicated when large organizations have multiple instances of such tools due to size, organizational variety, or acquisition history.
When these disparate data models are brought together, defects emerge. Many organizations have seen their implementations of these tools stall because the source data is not clean enough, and projects to clean it up are complicated and expensive. While business intelligence platforms purport to save time by automating complicated metric aggregations and exposing pre-calculated results via flexible reports, many suffer from a critical flaw.
The range of questions that can be answered by those reports is bounded by what was anticipated when the reports and underlying data schema were designed. This means that either the reports themselves are static, or that the report filters and slicers have limited ability to recalibrate to answer tangential questions or drill-downs.
When an executive inevitably asks a question that was not anticipated during the design of the cubes and reports, the business intelligence system often cannot provide an answer. The only remedy is to go back to the drawing board for design refinements, adding weeks or months to the decision cycle. Of course, this leads back to decisions based on instinct or estimates because facts are not readily available. It is often said that change is the only constant, but these operational systems don’t manage change well.
Unfortunately, many business intelligence platforms do not cope well with changes to the underlying data or to how the business uses their output. Adding or changing data sources requires the engagement of experts to capture, clean, and ingest the new data, and of data warehouse specialists to modify the schema to support it.