
Reduce Transactional System Complexities to Fund Your Next Innovation

Businesses today cannot operate without data—not even for a moment.

  • Businesses once had the upper hand, but today consumers have gained tremendous power. Increased choices, lower switching costs and easier access to product information have empowered customers to make more informed decisions and compare alternatives more easily. It has become extremely important for organizations to understand and anticipate customer behavior and needs using all available sources of information, including social media.
  • The situation is further complicated as organizations are expected to do more with less. Organizations need to optimize their processes and IT resources to create new opportunities, to mitigate risk, and to increase efficiency.
  • Every day, incredible amounts of diverse data are being generated, ranging from online clicks, transactions and machine-generated sensor data to social media posts, emails and videos. Businesses understand that collecting, processing and embedding this constantly growing stream of both structured and unstructured data into daily operations is key to meeting emerging challenges and uncovering new opportunities. In short, analyzing big data brings success. This is the new reality. Embracing analytics is now a requirement for successful organizational performance.

What is needed for IT organizations to meet today’s top challenges?

  • Handle more data, faster
  • Simplify setup, use and maintenance
  • Support existing systems
  • Use existing skills without requiring application code changes

In short, make it super fast, super easy… and have it deliver super savings.

PureData System for Transactions is a large-scale database appliance that helps you reduce the time, effort, cost and risk of designing, procuring, integrating and deploying highly available transactional database services.

Database Appliance:

  • Reduce the time, effort, cost and risk to design, procure, integrate and deploy non-stop data services
  • Fast deployment of high availability clusters and databases

High Availability / Scalability:

  • Improve uptime and reduce downtime costs
  • Simplified disaster recovery
  • Scale out to handle growing data

Infrastructure Efficiency:

  • Consolidate many databases onto a single system
  • Reduce data center costs: space, power, cooling
  • Reduce storage costs

IT Administrator Productivity:

  • Application transparency; no application changes
  • Simplified self-management lowers IT staff time
  • Leverage existing skills

Find out more about PureData System for Transactions. Attend one of our seminars: “Reduce Transactional System Complexities to Fund Your Next Innovation.” Click here to see the dates, locations and agenda.


Oh no! The next “big data” project is coming!

Your IT infrastructure has grown and evolved over many years and is the heart and soul of how your company operates. You’ve invested years to get it to where it is today – it’s running smoothly, and you consider your IT staff to be the very best at what they do. But deep in your heart you have an uneasy feeling…you know what’s coming next.

There is a backlog of projects on your plate. Important projects that will improve your company’s bottom line. First-mover projects that will tap into “big data” and empower your line of business managers to pursue new markets and get a jump on your competitors. But you know that your infrastructure can’t handle much more and that your staff can’t keep up with performance tuning and the few projects that are currently in the works.

The CEO just requested a meeting for next week. You know another significant big data project is coming, and this is just the tip of the iceberg for what’s coming later this year. On your drive home you ask yourself, “How can I possibly take on more projects, and more data? How can I change my infrastructure so I can deploy new applications faster? How can I shift my staff from tuning and maintenance to focus on higher value work?”

Workload optimization with expert integrated systems

As more servers, storage and software components have been brought into the data center, complexity has risen to the point of being almost unmanageable. General purpose systems have been forced to handle multiple workloads, and teams of database, application and system administrators spend a great deal of time and effort configuring, tuning and maintaining the systems for top performance and efficiency. With such a complex infrastructure, reliability often suffers and system downtime becomes a serious business risk.

At the crux of the issue is that different applications have different data workload characteristics, placing often conflicting requirements on the hardware, storage and software. Transaction and analytic processing tasks constitute very different workloads. Unless your data workloads are modest with respect to characteristics like data volume, number of users and analytics complexity, you need systems optimized in different ways to efficiently meet big data challenges.

Typically, IT organizations purchase general purpose systems that are not optimized for any particular workload. They tune these systems for one workload or the other, and spend considerable time and effort keeping them tuned.

But what is good tuning for one type of workload is not good for another. Data retrieval optimizations that benefit one access path are likely to penalize alternative paths. The structural elements that were optimized for transactions, for example – indexes, shared memory, locks, caches, etc. – all impose performance and complexity penalties in an analytic environment, where unpredictable (“against the grain”) access paths and patterns are the rule. Systems optimized to handle structured data also differ from those built to handle a wide variety of structured and unstructured data.
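
The trade-off is easy to see even in a toy experiment. The sketch below is illustrative only – it uses SQLite rather than any IBM product, and the orders table, idx_customer index, row counts and query counts are all invented for the example – but it shows how a structure that speeds transactional point lookups (an index) imposes a maintenance cost on bulk, scan-style work such as loading data.

```python
# Illustrative sketch only (not tied to any IBM product): a toy SQLite
# experiment showing that a structure which helps transactional point
# lookups (an index) adds overhead elsewhere. The "orders" table and
# "idx_customer" index are invented for the example.
import random
import sqlite3
import time


def build_table(with_index: bool, rows: int = 100_000) -> sqlite3.Connection:
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (id INTEGER, customer INTEGER, amount REAL)")
    if with_index:
        conn.execute("CREATE INDEX idx_customer ON orders(customer)")
    start = time.perf_counter()
    conn.executemany(
        "INSERT INTO orders VALUES (?, ?, ?)",
        ((i, random.randrange(10_000), random.random() * 100) for i in range(rows)),
    )
    conn.commit()
    print(f"bulk load     (index={with_index}): {time.perf_counter() - start:.2f}s")
    return conn


def point_lookups(conn: sqlite3.Connection, with_index: bool, queries: int = 500) -> None:
    start = time.perf_counter()
    for _ in range(queries):
        conn.execute(
            "SELECT SUM(amount) FROM orders WHERE customer = ?",
            (random.randrange(10_000),),
        ).fetchone()
    print(f"point lookups (index={with_index}): {time.perf_counter() - start:.2f}s")


for flag in (False, True):
    conn = build_table(with_index=flag)
    point_lookups(conn, flag)  # the index pays off here...
    # ...but the bulk load above paid the cost of maintaining that index.
```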

Separating transaction processing and analytic processing onto separate, workload-optimized systems helps ensure that overall performance is optimized. Data transaction systems can process large numbers of simple look-ups, while analytic systems execute complex queries on massive volumes of data.

There is a great opportunity to improve system performance and efficiency, and to accelerate solution deployment, by using expert integrated systems that come from the factory already optimized for specific workloads. And this is why IBM designed and built the PureData System with different models that are specifically optimized for different transaction processing and analytic workloads.


Thank you, Vincent, for sharing this interesting article on predicting future home values. http://www.analyticbridge.com/profiles/blogs/here-s-what-your-home-will-be-worth-in-12-months?goback=.gde_4520336_member_226716431

I’d like to tell a story, though, that may help predictive modelers exercise extra caution.

For anyone who understands the “real” state of the housing market, it is clear that this is one market where predictive models can fail miserably, and Zillow is not immune. Predictive models and forecasting can work well in normal market conditions, and they can be indicative in “almost normal” markets where the major influencing factors are well understood and the forecast period is relatively short.

(Image: housing bubble)

But trying to use predictive models in abnormal markets, when core assumptions on predictive elements may be inaccurate, when new, complex relationship factors are not well understood, when one-time disruptive events cannot be assessed…well, that’s where things can break down in a hurry. Bottom line: make sure your model is sound, considers the right variables, and is tested and retested over time before you rely on its results.

Zillow’s model:

The Zillow Home Value Index is the median value of a home for an area. The Zillow Home Value Forecast is Zillow’s prediction of what the Zillow Home Value Index will be one year from now. Zillow uses data on a number of housing indicators as well as general economic indicators. The housing indicators include the mortgage interest rate, property tax rate, construction costs, number of vacant homes, percentage of loans that are subprime, percentage of delinquent loans and supply of homes for sale. The general economic indicators include the change in household income, population growth and unemployment rate.
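
To make the discussion concrete, here is a minimal sketch of the kind of model described above – an index forecast driven by a weighted combination of housing and economic indicators. This is not Zillow’s actual model: the feature list simply mirrors the indicators named in the paragraph, the fitting method is plain least squares, and the numbers are random placeholders so the example runs.

```python
# A minimal sketch (not Zillow's actual model) of forecasting a home value
# index from the kinds of indicators listed above. The feature names mirror
# the post; the data is random placeholder numbers purely so the example runs.
import numpy as np

rng = np.random.default_rng(0)

features = [
    "mortgage_rate", "property_tax_rate", "construction_cost",
    "vacant_homes", "pct_subprime", "pct_delinquent", "homes_for_sale",
    "household_income_change", "population_growth", "unemployment_rate",
]

n_periods = 120                                     # e.g. 10 years of monthly data
X = rng.normal(size=(n_periods, len(features)))     # placeholder indicator values
true_w = rng.normal(size=len(features))             # placeholder "true" relationships
y = X @ true_w + rng.normal(scale=0.1, size=n_periods)  # placeholder index changes

# Ordinary least squares fit: index change ~ weighted sum of indicators.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# "Forecast" the next period from the latest indicator readings.
latest = rng.normal(size=len(features))
print("predicted change in home value index:", latest @ w)

# The caution in this post applies here: if a regime change (shadow inventory
# release, a Fed policy shift, hedge-fund buying) alters the relationships the
# weights were fit on, the forecast breaks silently.
```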

Underlying factors that make this real estate market anything but “normal”:

The Fed’s easy money policies of the past (which affect financial markets across the board) created excess money that found its way into the hype of the dot-coms. When the truth was exposed, the dot-com bubble burst. Over time, continued easy money policies kept excess money in play, and it found its way into real estate. Lenders saw a once-in-a-lifetime opportunity, threw caution to the wind and created sub-prime mortgages for the ‘high risk of default’ market.

The gov’t also saw an opportunity for political pursuits (and yes, for economic growth), and it created massive programs that encouraged home ownership for lower-income households. These two factors had a multiplying effect, raising the home ownership rate from ~65% to ~69% in relatively short order. The feeding frenzy caused a very abnormal increase in prices, and countless homeowners tapped into their equity to finance high-consumption lifestyles.

Lenders saw an even greater opportunity to package up loans, securitize them (turn them into tradable securities, essentially) and sell big bundles of them to unsuspecting investors – oh yes, with regulatory agency encouragement. Now lenders had freed up their capital reserves to make more loans, and they had sold off their risk (especially sub-prime risk) to others. Happy times for lenders, happy times for the gov’t, and happy times for homeowners…until the real estate bubble burst.

Real estate prices fell rapidly. Real people lost their jobs, tightened their belts and curbed consumer spending, affecting the entire economy. Delinquencies increased (go figure!), foreclosures increased (go figure!), unemployment climbed (go figure!). Lenders tightened lending standards too much. The gov’t bailed out many who had their hands in the cookie jar. But the Fed kept on truckin’ with easy money, abnormally low interest rates, quantitative easing and bond buy-backs to try to stimulate a crippled economy. (Now the bubble is gov’t debt, but that’s another story.)

So, today’s real estate market: First, lenders are not processing delinquencies and foreclosures the way they should, because they don’t want to show any more distressed homes on their books. Many families (more than you would believe) have lived scot-free for 12, 18, 24 months or more – not paying their mortgage, knowing that their lender is not pressing forward with foreclosure. Second, lenders are slow to process short sales (where the lender allows the homeowner to sell for less than the outstanding principal, so the lender does not get fully paid out at closing). Third, lenders (and the gov’t – HUD, etc.) already have a huge inventory of REO properties (the so-called shadow inventory), and they certainly don’t want any more.

But here’s the rub…they are not processing these properties and putting them back into the market as they should… and THIS is causing an inventory shortage in some areas (e.g., Southern California), causing prices to climb back up. Think of the effect this has on a predictive model – what would happen to home prices if the lenders and the gov’t dumped their shadow inventory? Some say the banks want prices to go back up before they dump their inventory. I think bigger, hidden motives underlie this move. The media and politicians have jumped all over this, claiming that the real estate market is in full recovery.

Looks good for the real estate market. Looks good for the economy. Looks good for lenders, the Fed and the gov’t. “Things have improved. The housing market is firmly on its road to recovery. The housing market is strong.” And so on.

But it’s all artificially created, resulting in a highly abnormal market with extreme uncertainties – the Fed’s ultra-low interest rates, the Fed’s easing policies and bond buy-backs, still-strict lending standards, lenders’ immediate resale of mortgages to free up more capital for more loans (retaining absolutely zero risk), CDO rating agency conflicts of interest, and a huge shadow inventory kept out of the market. Also, huge hedge funds (example here) are now getting into real estate and buying up thousands of residential properties, for cash, completely disrupting local markets and the business models of typical real estate investors (who, by the way, offer tremendous stabilizing value to the real estate market as a whole, but are much maligned by the media, lenders and the gov’t). What effect do these hedge funds have on the predictive model?

And oh, let’s not forget about the real unemployment rate (see www.bls.gov and http://www.bls.gov/data/#unemployment …the gov’t and media should report the U-6 number, not U-3…PLEASE, go to bls.gov and read about the real U-6 unemployment situation…don’t rely on the media), the current trends in personal income and consumption, total consumer debt obligations, changes in payroll taxes and other tax schedules and loopholes, the effect of Obamacare on household incomes, and so on. And let’s throw in the changing demographics and psychographics of our population – age, income, employment uncertainty, frequency of moving, disposition toward owning vs. renting, and how everyone is processing the never-ending stories that seem to come out of the media and our politicians.

OK, enough of all that…back to predictive models and forecasting. How many of the variables mentioned above are taken into account in Zillow’s predictive model of home prices? Are they using U-6 rather than U-3 unemployment numbers? Are they considering the shadow inventory of REOs, the effect it has on local inventory and prices, and when lenders might dump those properties back into the market, disrupting both inventories and prices? What are their assumptions about the Fed’s policies and how long it will keep interest rates artificially low? Are they accounting for the dramatic volume of purchases by hedge funds, and how that affects local inventories and prices?

I honestly do not know the details of how Zillow accounts for all the data relationships in its model, but I hope it considers more than U-3 unemployment numbers, basic inventory counts of listed homes, current interest rates and average sales prices.

Point is, there is a LOT to consider in developing a predictive model, and there are often many underlying factors to account for. Only a few (unpopular) economists predicted the extent of the real estate bubble and its unsustainable track. And even fewer can predict its recovery, given the introduction of many pervasive underlying factors that were not considerations a decade or so ago. The real estate market is in uncharted territory…be careful about any predictive models…and be careful what you read in the media!


Big data requires extreme workloads

Read Using Big Data for Smarter Decision Making by Colin White.

Big data involves more than just the ability to handle large volumes of data. It also represents a wide range of new analytical technologies that open up new business possibilities. But before organizations can reap the rewards of big data analytics, they face a set of challenges around deploying new technologies into existing data warehouse environments and providing systems that optimize computing performance for different workloads.

As I explored in my recent posts on smart consolidation, the data warehousing and analytics environment is more complex today than even just a few years ago. Many have found that mixing operational analytics and deep or advanced analytics on the same system brings significant challenges to performance and meeting SLAs. With operational analytics, business managers need continuous data ingest and fast access to standard reports with the ability to perform ad hoc queries that drill down into the data and provide new perspectives and insight. When a deep analytical query comes along that requires significant data volumes and extreme computing resources, operational query performance suffers. Big data adds yet another complexity around data sources, data quality, longevity of the data, and whether some of the big data should be integrated into the enterprise data warehouse for longer-term historical analysis.

The best way to handle these different types of workloads is to optimize systems for the workload, and to combine these solutions with the enterprise data warehouse to create an “analytical environment”. We see many types of optimized systems in the market today – data warehouse appliances, data marts, NoSQL systems, Hadoop-based systems, streaming data analytical systems, cloud-based solutions and more – that complement (not replace) the enterprise data warehouse. Each system is optimized for a specific workload, and used together they can help streamline operations and provide fast response to a wide variety of business needs.

A majority of organizations today already understand this – really, optimizing computing resources for various types of data and their associated workloads is nothing new. At some point in the evolution of the data warehouse and analytical environment, organizations reach a tipping point that drives separation of data and workloads. Data growth and new sources of (traditional) data, an increased number of users, increased query complexity, and “big data” are all drivers of this tipping point.

Colin White of BI Research wrote a white paper exploring new developments in data warehousing and analytics and the benefits that analyzing big data brings to the business. The paper also reinforces this notion of optimizing systems based on the types of data and workloads. The conclusion: integrating these systems into a single analytics infrastructure drives smarter and faster business decisions. Read Using Big Data for Smarter Decision Making.


A modern data warehousing and analytics architecture

Consider the example of a credit card company. When a customer applies for a credit card, the sales department collects the customer’s details and financial history and compares them to historical data from third-party reporting agencies to determine the customer’s ability to manage and repay debts. The customer data flows to the marketing department, where it is analyzed for trends and compared with opinion content collected from the Internet to make decisions on promotional campaigns.

Eventually, the customer might request a credit-line increase, at which time the customer service system will recommend up-sell opportunities and the lending department uses the customer’s payment history to evaluate the request. Meanwhile, the company’s online transaction processing (OLTP) systems are fielding millions of transaction authorization requests per minute. Real-time analytics systems are looking for anomalies that may indicate fraud by comparing the streams of transaction data to patterns developed by analyzing customers’ purchasing histories. As all this data ages and becomes more static, it shifts to archival systems and is stored using specialized technologies like Apache Hadoop—yet it remains available for instant auditing and long-term trend analysis.
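
As a deliberately simplified illustration of that real-time check, the sketch below flags a transaction when its amount deviates sharply from the pattern in a customer’s purchase history. It is hypothetical, not the card company’s actual system: the CustomerProfile class, the z-score threshold and the sample amounts are all invented here.

```python
# Illustrative sketch only (not the credit card company's actual system):
# flag a transaction as suspicious when its amount deviates sharply from the
# pattern mined from that customer's purchase history. The threshold and
# sample data are invented for the example.
from dataclasses import dataclass
import statistics


@dataclass
class CustomerProfile:
    mean_amount: float
    stdev_amount: float


def build_profile(history: list[float]) -> CustomerProfile:
    # Built offline from the customer's historical purchases.
    return CustomerProfile(statistics.mean(history), statistics.pstdev(history))


def looks_anomalous(profile: CustomerProfile, amount: float, z_cut: float = 4.0) -> bool:
    # Simple z-score test against the customer's usual spending pattern.
    if profile.stdev_amount == 0:
        return amount != profile.mean_amount
    z = abs(amount - profile.mean_amount) / profile.stdev_amount
    return z > z_cut


profile = build_profile([12.50, 40.00, 22.75, 18.30, 55.10, 31.40])
for amount in [27.80, 44.10, 2500.00]:  # simulated authorization stream
    print(amount, "-> review" if looks_anomalous(profile, amount) else "-> approve")
```

A production system would, of course, mine far richer patterns (merchant category, geography, transaction velocity), but the separation of concerns is the same: profiles are built offline from purchase histories, while a lightweight check runs against the live authorization stream.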

At the same time, the marketing department is investigating a new customer segmentation model to use in an upcoming product launch. Marketing has been busy analyzing their complete customer database to determine online banking trends as well as smart phone and mobile banking adoption rates. After many iterations of their segmentation model, they believe they have identified the data elements and customer behaviors that define a financially sophisticated and technologically savvy customer segment. Now, several months prior to the launch, the product manager is running predictive models to test the business case on combinations of marketing messages and user adoption rates. The team is free to test and retest their assumptions, even though their queries take a long time to execute, because they are running on an analytics-optimized data warehouse appliance—not the primary operational analytics system.

The credit card company is taking advantage of distributed data and a distributed workload architecture. By intelligently separating workloads, it is able to creatively analyze data to identify new business models, test assumptions for new paid services and optimize launch and execution plans without impacting the daily, hourly and up-to-the-minute operational needs of its core business.


Many companies have found success in building data warehouses that meet basic needs, but they are now finding they need to move beyond the back-office warehouse to leverage information on the front lines of decision making throughout the entire company. They need information on demand, and they need the ability to build systems that can deliver on that promise with real incremental returns.

For those who understand the power of an analytics-driven organization, this is a most exciting time. The opportunities are limitless: customers, prospects, suppliers and the business itself are creating endless geysers of data. Analytics tools are inexpensive, widely available and so easy to use that they make business sense in almost any situation.

To move forward, organizations need a strategy that delivers on several focused business requirements:
1) Operational management: Accelerate time-to-market to meet business SLAs for new and existing business processes, operational analytics and business intelligence (BI).
2) Big data: Leverage unstructured data, social media and other “big data” information sources to gain more insights from more data—without impacting the business SLAs.
3) Predictive analytics: Forecast future trends and analyze risks and potential outcomes.

Many IT organizations are adopting a strategy called smart consolidation that reconciles the need to simultaneously distribute data warehousing and analytics capabilities and infrastructure while centralizing management. Smart consolidation is a method for evolving an existing data warehouse architecture to meet today’s demanding analytic needs, such as big data, streaming data and unstructured data.

In a nutshell, it involves thinking beyond the traditional warehouse structures that have provided great success with structured data, basic reporting and analysis. Smart consolidation is driven by these four goals:

  1. Consolidate and govern enterprise data
  2. Optimize workloads for performance and SLAs
  3. Simplify the delivery of analytics by leveraging appliances
  4. Flexibly extend analytic capability as needed

The basis for smart consolidation is to completely optimize an analytics architecture by placing the right workload against the right data, in the right place, at the right cost and the right performance level.

Smart consolidation acknowledges that an organization requires different types of databases, analysis tools and data formats. It needs traditional data warehouses, data warehouse appliances and operational BI systems that can accommodate different types of workloads. It also needs systems based on advanced technologies that can efficiently handle data that is moving extremely quickly as well as large volumes of data that does not change frequently.

Single system? I think not

No single data system could efficiently serve all these requirements and perform well for both transactional and analytical workloads. Under the smart consolidation strategy, multiple specialized elements use industry standards to communicate and join together to form a fluid, agile data ecosystem that delivers business insight, cross-organizational data governance and centralized IT resource management. By allowing many different elements to serve specialized needs, smart consolidation also enables organizations to accommodate the endless variety and rapidly growing ocean of semi-structured and unstructured data.

Bigger data?


Data will always lead information, always has, always will. Years ago, we created more data than we could analyze and understand at the time. Today, the same. Tomorrow, the same. The amount of data being created will always lead the ability to get information and understanding from it.

“Big data” is a leading-edge description of having more data than can be processed into information, analyzed and understood. Many definitions of big data exist; for the sake of argument, let’s say 100 TB or bigger. The volume, variety and velocity of data today are certainly accelerating, no question about that. But go back a couple of decades, and we could have made the same statement every year.

Leading companies in the big data space have solutions available today that can tap into an unprecedented amount of data. Petabyte-scale data warehouses, although not pervasive, are nothing new. Assembling the data is one thing, but analyzing it, presenting it and governing it is another. THE leading company has assembled a full “platform” covering the full breadth – operational analytics, deep / advanced analytics, predictive analytics, federated analytics, Hadoop analytics, streaming analytics… complete with end-to-end information governance.

Here is a sampling of big data use cases. Just skim through this and it’s sure to get your creative juices flowing on what CAN be done in your company. http://public.dhe.ibm.com/common/ssi/ecm/en/imc14715usen/IMC14715USEN.PDF 

Once you’ve skimmed through it, come back here and post your comments on 1) how you are using big data today, or 2) what you would like to use big data for.

And you know what? Years from now, the amount of data available will still outpace the ability to analyze it. At that time, will we call it “bigger data”?
