Report: SMB Cloud Spending to Reach $95B by 2015
Google Snaps Up Channel Intelligence for $125M
Search giant Google turns its attention to e-commerce and product marketing with its acquisition of Channel Intelligence.
Win8 like a first time visit to WalMart
I have been playing with my Win8 Ultrabook for the last week, gradually exploring its features. My friend Louis Columbus tells me Microsoft has bought into “Personas” in its development philosophy simulating how different roles navigate its software. I think they forgot one persona – that of a tourist who enters a US WalMart on [...]
Win8 like a first time visit to WalMart is copyrighted by Vinnie Mirchandani. If you are reading this outside your feed reader, you are likely witnessing illegal content theft.
Enterprise Irregulars is sponsored by Salesforce.com, Workday and Zoho.
Why Is Quantum Computing So Hard?
VoIP and Other Communications Tools
Back in 2008 we started publishing extensively on the notion of Peak Oil and its pending and growing effect on business and life in general. Peak Oil is the theory (and theory trumps hypothesis or just plain opinion) that there is only so much oil in the ground and that you can only extract it [...]
VoIP and Other Communications Tools is copyrighted by Denis Pombriant. If you are reading this outside your feed reader, you are likely witnessing illegal content theft.
Microsoft Dynamics Gets More Pervasive BI in Latest Upgrade
Microsoft's latest upgrade of its Dynamics ERP software includes added business intelligence features and is designed to help companies respond to rapidly changing business requirements.
Threats in the Cloud Environment: SSH Scanners
The February issue of Profit is now available!

But managers are certainly not helpless in the face of headwind in the global talent market. With the right talent management strategies, corporate culture, IT tools, and (of course) people, many organizations can be inoculated against the skills malaise. But it will require long-term vision and early action.
This issue of Profit looks at some of the forces at work in the market and some of the smart, creative efforts Oracle customers are making to address talent issues in their organizations. I hope the stories prove useful as we face the coming skills gap together.
Some highlights from the February 2013 issue of Profit:
Mind the Gap
Getting greater value from millennial and baby boomer employees
Breakthrough Talent
Enterprise systems help employers cope with the global skills shortage.
Chain Reaction
Built on sound IT and standard processes, LinkedIn's finance department goes from startup to IPO—and beyond.
Giving Up Control
How empowered customers and employees make your business stronger.
Going for the Gold
NBC Sports Group CMO John Miller says social engagement can return unprecedented marketing results.
Innovation at Work
How smart IT solutions are inspiring fresh ideas among the workforce.
Is one issue of Profit not enough to get you through to May? Visit the Profit archives, or follow @OracleProfit on Twitter for a daily dose of enterprise technology news from Profit.
Oracle Releases Oracle’s PeopleSoft PeopleTools 8.53
Istanbul Ticaret University - Production & Research Club Event - 03.January.2013
On 3 January 2013, technology companies presented their products and sector solutions at this event. We attended and presented our application products to university students.
The Surface Pro: My next tab-top?
NEW - Profit Briefing: What We're Thinking About

So, without further ado:
Proposed Acquisition: Oracle Buys Acme Packet
The combination of Oracle and Acme Packet is expected to accelerate the migration to all-IP networks by enabling secure and reliable communications from any device, across any network. Oracle
Oracle Voice: 10 Reasons Why CEOs Don't Understand Their Customers
Oracle Senior Vice President Bob Evans reveals the results of a customer experience study commissioned by Oracle -- and their dangerous implications. Forbes
Trends for 2013: Trends in Employee Training: Online Global Learning Management
"A learning management system is only as good as the learning design, technology, and methodology for delivery that governs it," says Denise Pirrotti Hummel, CEO of Universal Consensus, a cross-cultural management consulting and training firm. Profit Online
Study: Two-thirds of American adults who are online use Facebook.
This, according to a Pew Research study. However, 61 percent of these users say "they have voluntarily taken a break from using Facebook for a period of several weeks or more." Learn the reasons so many say they need a break. Pew Internet
Debate: Is the Internet Making Us More Generous?
Columbia University's first Chief Digital Officer Sree Sreenivasan argues that "the Internet has made the world more generous, while also changing our traditional understanding of generosity itself." What do you think? Read his essay and join the discussion. Big Questions Online
Oracle R Enterprise 1.3 gives predictive analytics an in-database performance boost
Recently released Oracle R Enterprise 1.3 adds packages to R that enable even more in-database analytics. These packages provide horizontal, commonly used techniques that are blazingly fast in-database for large data. With Oracle R Enterprise 1.3, Oracle makes R even better and usable in enterprise settings. (You can download ORE 1.3 here and documentation here.)
When it comes to predictive analytics, scoring (predicting outcomes using a data mining model) is often a time-critical operation. Scoring can be done online (real-time), e.g., while a customer is browsing a webpage or using a mobile app, where on-the-spot recommendations can be made based on current actions. Scoring can also be done offline (batch), e.g., predicting which of your 100 million customers will respond to each of a dozen offers, where applications use the results to identify which customers should be targeted with a particular ad campaign or special offer.
In this blog post, we explore where using Oracle R Enterprise pays huge dividends. When working with small data, R can be sufficient, even when pulling data from a database. However, depending on the algorithm, the benefits of in-database computation can appear at just a few thousand rows. At tens of thousands of rows, the time difference makes an interactive session truly interactive; at hundreds of thousands of rows it becomes a real productivity gain; and on millions (or billions) of rows, a competitive advantage! In addition to the performance benefits, ORE integrates R with the database, enabling you to leave the data in place.
We’ll look at a few proof points across Oracle R Enterprise features, including:
- OREdm – a new package that provides R access to several in-database Oracle Data Mining algorithms (Attribute Importance, Decision Tree, Generalized Linear Models, K-Means, Naïve Bayes, Support Vector Machine).
- OREpredict – a new package that enables scoring models built using select standard R algorithms in the database (glm, negbin, hclust, kmeans, lm, multinom, nnet, rpart).
- Embedded R Execution – an ORE feature that runs R under database control and boosts the performance of CRAN predictive analytics packages by providing faster data access than transfers between the database and a client, as well as by leveraging a more powerful database machine with greater RAM and CPU resources.
OREdm
Pulling data out of a database for any analytical tool impedes interactive data analysis due to access latency, either directly when pulling data out of the database or indirectly via an IT process that involves requesting data to be staged in flat files. Such latencies can quickly become intolerable. On the R front, you’ll also need to consider whether the data will fit in memory. If flat files are involved, consideration needs to be given to how files will be stored, backed up, and secured.
Of course, model building and data scoring execution time is only part of the story. Consider scenario A, the “build combined script,” where data is extracted from the database, and an R model is built and persisted for later use. In the corresponding scenario B, the “score combined script,” data is pulled from the database, a previously built model loaded, the data scored, and the scores written back to the database. This is a typical scenario for, e.g., enterprise dashboards or applications supporting campaign management or next-best-offer generation. In-database execution provides significant performance benefits, even for the relatively small data sets included below, and readers should be able to reproduce such results at these scales. We’ve also included a Big Data example by replicating the 123.5 million row ONTIME data set to 1 billion rows. Consider the following examples:
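As a rough sketch, the two scenarios might look like the following in ORE. This is illustrative only: the connection details, the table name MYDATA, and the columns y, x1–x3 are hypothetical, and the code requires an Oracle Database with ORE installed, so it is not runnable standalone.

```r
library(ORE)
# hypothetical connection details
ore.connect(user = "rquser", sid = "orcl", host = "dbhost",
            password = "secret", all = TRUE)

## Scenario A, client side: pull data, build and persist an R model
dat <- ore.pull(MYDATA)                    # data leaves the database
mod <- lm(y ~ x1 + x2 + x3, data = dat)
ore.save(mod, name = "models", overwrite = TRUE)  # ORE datastore

## Scenario A, in-database: build with ore.lm; data stays in place
ore.mod <- ore.lm(y ~ x1 + x2 + x3, data = MYDATA)

## Scenario B, in-database: score; predictions remain a database object
scores <- predict(ore.mod, newdata = MYDATA)
# persist scores as a table (sketch; exact wrapping of the prediction
# vector into an ore.frame may differ by ORE version)
ore.create(data.frame(SCORE = scores), table = "MYSCORES")
```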
Linear Models: We compared R's lm with ORE's in-database ore.lm algorithm on the combined scripts. On data sets ranging from 500K to 1.5M rows with 3 predictors, in-database analytics showed an average 2x-3x performance improvement for model building and nearly 4x for scoring. Notice in Figure 1 that the slope of the trend is significantly less for ore.lm than for lm, indicating greater scalability for ore.lm.
Figure 1. Overall lm and ore.lm execution time for model building (A) and data scoring (B)
Figure 2 provides a more detailed view comparing data pull and model build time for build detail, followed by data pull, data scoring, and score writing for score detail. For model building, notice that while data pull is a significant part of lm’s total build time, the actual build time is still greater than ore.lm. A similar statement can be made in the case of scoring.
Figure 2. Execution time components for lm and ore.lm (excluding model write and load)
Naïve Bayes from the e1071 package: On 20-predictor datasets ranging from 50k to 150k rows, in-database ore.odmNB improved data scoring performance by a factor of 118x to 418x, while the full scenario B execution time yielded a 13x performance improvement, as depicted in Figure 3B. Using a non-parallel execution of ore.odmNB, we see the cross-over point where ore.odmNB overtakes R, but more importantly, the slope of the trend points to the greater scalability of ORE, as depicted in Figure 3A for the full scenario A execution time.
Figure 3. Overall naiveBayes and ore.odmNB execution time for model building (A) and data scoring (B)
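A sketch of the two paths compared above. The table CUSTOMERS and column RESPONDED are hypothetical, and the code requires an Oracle Database with ORE, so it is not runnable standalone.

```r
library(ORE)
library(e1071)

# Client side: pull the data, then build and score with e1071
dat  <- ore.pull(CUSTOMERS)                 # hypothetical 20-predictor table
nb   <- naiveBayes(RESPONDED ~ ., data = dat)
pred <- predict(nb, newdata = dat)

# In-database: ore.odmNB delegates to Oracle Data Mining's Naive Bayes
ore.nb   <- ore.odmNB(RESPONDED ~ ., data = CUSTOMERS)
ore.pred <- predict(ore.nb, newdata = CUSTOMERS, type = "class")
```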
K-Means clustering: Using 6 numeric columns from the ONTIME airline data set, ranging from 1 million to 1 billion rows, we compared in-database ore.odmKMeans with R kmeans run through embedded R execution with ore.tableApply. At 100 million rows, ore.odmKMeans demonstrates better performance than kmeans, and it remains scalable at 1 billion rows. The performance results depicted in Figure 4 use a log-log plot. The legend shows the function invoked and corresponding parameters, using a subset of the ONTIME data set d. While ore.odmKMeans scales linearly with the number of rows, R kmeans does not; further, R kmeans did not complete at 1 billion rows.
Figure 4: K-Means clustering model building on Big Data
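Sketched in ORE terms, with d as the ore.frame of the 6 numeric ONTIME columns; the number of centers is illustrative, and the exact ore.odmKMeans arguments may vary by ORE version:

```r
# In-database K-Means via the Oracle Data Mining algorithm
km.ore <- ore.odmKMeans(~ ., data = d, num.centers = 10)

# R's kmeans executed at the database server through embedded R
# execution: ore.tableApply ships the function (and the data) to a
# database-spawned R engine instead of pulling rows to the client
km.r <- ore.tableApply(d, function(dat) kmeans(dat, centers = 10))
```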
OREpredict
With OREpredict, R users can also benefit from in-database scoring of R models. This becomes evident not only when considering the full “round trip” of pulling data from the database, scoring in R, and writing data back to the database, but also for the scoring itself.
Consider an lm model built using a dataset with 4-predictors and 1 million to 5 million rows. Pulling data from the database, scoring, and writing the results back to the database shows a pure R-based approach taking 4x - 9x longer than in-database scoring using ore.predict with that same R model. Notice in Figure 5 that the slope of the trend is dramatically less for ore.predict than predict, indicating greater scalability. When considering the scoring time only, ore.predict was 20x faster than predict in R for 5M rows. In ORE 1.3, ore.predict is recommended and will provide speedup over R for numeric predictors.
Figure 5. Overall lm execution time using R predict vs. ORE ore.predict
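A sketch of this pattern: the model is built once in open-source R, then ore.predict maps it into the database for scoring. The table MYDATA, its columns, and the sample size are hypothetical, and the code requires ORE against an Oracle Database.

```r
# Build the lm model in open-source R, e.g., on a pulled sample
samp <- ore.pull(head(MYDATA, 100000))      # hypothetical sample size
mod  <- lm(y ~ x1 + x2 + x3 + x4, data = samp)

# Score the full table in-database: ore.predict translates the R
# model so predictions are computed where the data lives
scores <- ore.predict(mod, newdata = MYDATA)
```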
For rpart, we see a similar result. On a 20-predictor, 1 million to 5 million row data set, ore.predict resulted in 6x – 7x faster execution. In Figure 6, we again see that the slope of the trend is dramatically less for ore.predict than for predict, indicating greater scalability. When considering the scoring time only, ore.predict was 123x faster than predict in R for 5 million rows.
Figure 6. Overall rpart Execution Time using R predict vs. ORE ore.predict
This scenario is summarized in Figure 7. In the client R engine, we have the ORE packages installed. There, we invoke the pure R-based script, which requires pulling data from the database. We also invoke the ORE-based script that keeps the data in the database.
Figure 7. Summary of OREpredict performance gains
To use a real-world data set, we again consider the ONTIME airline data set with 123.5 million rows. We build lm models with a varying number of coefficients, derived by converting categorical data to multiple columns. The variable p corresponds to the number of coefficients resulting from the transformed formula and depends on the number of distinct values in each column. For example, DAYOFWEEK has 7 values, so with DEPDELAY, p=8. In Figure 8, you see that scoring a single row with an lm model via embedded R (e.g., one-off or real-time scoring) has much more overhead (as expected, given that an R engine is being started) compared to ore.predict, which shows subsecond response time: 0.54 seconds at 40 coefficients and 1.1 seconds at 106 coefficients. Here are the formulas describing the columns included in the analysis:
- ARRDELAY ~ DEPDELAY (p=2)
- ARRDELAY ~ DEPDELAY + DAYOFWEEK (p=8)
- ARRDELAY ~ DEPDELAY + DAYOFWEEK + MONTH (p=19)
- ARRDELAY ~ DEPDELAY + DAYOFWEEK + MONTH + YEAR (p=40)
- ARRDELAY ~ DEPDELAY + DAYOFWEEK + MONTH + YEAR (p=106)
Figure 8. Comparing performance of ore.predict with Embedded R Execution for lm
Compare this with scoring the entire ONTIME table of 123.5 million rows. We see that ore.predict outperforms embedded R until about 80 coefficients, when embedded R becomes the preferred choice. We expect embedded R using predict to perform well at smaller sized data since all operations occur in memory. However, for larger number of coefficients, in-database scoring with ore.predict becomes a clear winner. Note that this reflects scoring only, not writing scores back to the database. If embedded R with R predict also wrote the scores to the database, ore.predict would have the advantage sooner.
Data Movement between R and Database: Embedded R Execution
One advantage of R is its community and CRAN packages. The goal for Oracle R Enterprise with CRAN packages is to enable reuse of these packages while:
- Leveraging the parallelization and efficient data processing capabilities of Oracle Database
- Minimizing data transfer and communication overhead between R and the database
- Leveraging R as a programming language for writing custom analytics
There are three ways in which we’ll explore the performance of pulling data.
1) Using ore.pull at a separate client R engine to pull data from the database
2) Using Embedded R Execution and ore.pull within an embedded R script from a database-spawned R engine
3) Using Embedded R Execution functions for data-parallelism and task-parallelism to pass database data to the embedded R script via function parameter
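The three paths can be sketched as follows, with ONTIME_S as the example table. The ore.doEval control argument ore.connect and the ore.get lookup follow ORE's embedded R execution interface, but treat the details as illustrative; none of this runs without an Oracle Database with ORE.

```r
# 1) Client-side pull: rows cross the network into this R session
dat <- ore.pull(ONTIME_S)

# 2) Server-side pull: ore.doEval runs the function in a database-
#    spawned R engine; ore.connect = TRUE lets it use ORE there
n <- ore.doEval(function() {
  dat <- ore.pull(ore.get("ONTIME_S"))   # pull happens next to the data
  nrow(dat)
}, ore.connect = TRUE)

# 3) Framework transfer: ore.tableApply hands the data to the
#    function as a parameter, with no explicit pull at all
n <- ore.tableApply(ONTIME_S, function(dat) nrow(dat))
```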
With ORE Embedded R Execution (ERE), the database delivers data-parallelism and task-parallelism, and reduces data access latency due to optimized data transfers into R. Essentially, R runs under the control of the database. As illustrated in Figure 9, loading data at the database server is 12x faster than loading data from the database to a separate R client. Embedded R Execution also provides a 13x advantage when using ore.pull invoked at the database server within an R closure (function) compared with a separate R client. The data load from database to R client is depicted as 1x – the baseline for comparison with embedded R execution data loading.
Figure 9. Summary of Embedded R Execution data load performance gains
Data transfer rates are displayed in Figure 10, for a table with 11 columns and 5 million to 15 million rows of data. Loading data via ORE embedded R execution using server-side ore.pull or through the framework with, e.g., ore.tableApply (one of the embedded R execution functions) is dramatically faster than a non-local client load via ore.pull. The numbers shown reflect MB/sec data transfer rates, so a bigger bar is better!
Figure 10. Data load and write execution time with 11 columns
While this is impressive, let’s expand our data up to 1 billion rows. To create the 1 billion row data set (1.112 billion rows), we duplicated the 123.5 million row ONTIME data set 9 times, replacing rows with year 1987 with years 2010 through 2033, and selected 6 integer columns (YEAR, MONTH, DAYOFMONTH, ARRDELAY, DEPDELAY, DISTANCE), with a bitmap index on (YEAR, MONTH, DAYOFMONTH). The full data set weighs in at ~53 GB.
In Figure 11, we see linear scalability for loading data into the client R engine. Times range from 2.8 seconds for 1 million rows to 2700 seconds for 1 billion rows. While your typical user may not need to load 1 billion rows into R memory, this graph demonstrates that it is feasible to do so.
Figure 11. Client Load of Data via ore.pull for Big Data
In Figure 12, we look at how the degree of parallelism (DOP) affects data load times with ore.rowApply. This test addresses how fast ORE can load 1 billion rows, e.g., when scoring data. The degree of parallelism corresponds to the number of R engines spawned for concurrent execution at the database server. The number of chunks the data is divided into is 1 for a DOP of one, and 10 times the DOP for the remaining tests. For a DOP of 160, the data was divided into 1600 chunks, i.e., 160 R engines were spawned, each processing 10 chunks. The graph on the left shows that execution times for the 1 billion row data set improve through a DOP of 160. As expected, at some point the overhead of spawning additional R engines and partitioning the data outweighs the benefit. At its best time, processing 1 billion rows took 43 seconds.
Figure 12. Data load via ore.rowApply for Big Data at varying degrees of parallelism
In the second graph of Figure 12, we contrast execution time for the “sweet spot” identified in the previous graph with varying number of rows. Using this DOP of 160, with 1600 chunks of data, we see that through 100 million rows, there is very little increase in execution time (between 6.4 and 8.5 seconds in actual time). While 1 billion rows took significantly more, it took only 43 seconds.
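Chunked processing with ore.rowApply can be sketched as below. The chunk size, DOP, and per-chunk function are illustrative; whether parallel accepts a numeric DOP hint may depend on the ORE version, and the code requires an Oracle Database with ORE.

```r
# Each database-spawned R engine receives 'rows' rows per invocation;
# 'parallel' requests the degree of parallelism
res <- ore.rowApply(ONTIME,
  function(chunk) {
    # hypothetical per-chunk scoring
    data.frame(PRED = 0.9 * chunk$DEPDELAY)
  },
  rows = 1000000, parallel = 160)
```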
We can also consider data writes at this scale. In Figure 13, we see linear scalability from 1 million through 1 billion rows using the ore.create function to create database tables from R data. Actual times ranged from 2.6 seconds to roughly 2600 seconds.
Figure 13. Data Write using ore.create for Big Data
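For example (the table name is hypothetical, and an ORE connection to an Oracle Database is assumed):

```r
# Materialize an R data.frame as a database table...
df <- data.frame(ID = 1:1000000, SCORE = runif(1000000))
ore.create(df, table = "SCORES_TBL")

# ...and remove it again when done
ore.drop(table = "SCORES_TBL")
```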
ORE supports data-parallelism to enable, e.g., building predictive models in parallel on partitions of the data. Consider a marketing firm that micro-segments customers and builds predictive models on each segment. ORE embedded R execution automatically partitions the data, spawns R engines according to the degree of parallelism specified, and executes the specified user R function on each partition. To address how efficiently ore.groupApply can process data, Figure 14 shows the total execution time to process the 123.5M ONTIME rows with a varying number of columns; ore.groupApply scales linearly as the number of columns increases. Three partitioning columns were selected based on their number of distinct values: TAILNUM 12861, DEST 352, and UNIQUECARRIER 29. For UNIQUECARRIER, the run with all columns could not complete, since partitioning into only 29 categories left each partition's data too large for a single R engine.
Figure 14. Processing time for 123.5M rows via ore.groupApply
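The micro-segmentation pattern can be sketched as below; the per-partition formula and parallel setting are illustrative, and the code assumes an ORE connection with ONTIME available as an ore.frame.

```r
# One lm model per carrier: ORE partitions ONTIME on the INDEX column,
# spawns server-side R engines, and runs the function per partition
mods <- ore.groupApply(ONTIME,
  INDEX = ONTIME$UNIQUECARRIER,
  function(dat) lm(ARRDELAY ~ DEPDELAY + DISTANCE, data = dat),
  parallel = TRUE)
```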
ORE also supports row-parallelism, where the same embedded R function can be invoked on chunks of rows. As with ore.groupApply, depending on the specified degree of parallelism, a different chunk of rows will be submitted to a dynamically spawned database server-side R engine. Figure 15 depicts a near linear execution time to process the 123.5M rows from ONTIME with varying number of columns. The chunk size can be specified, however, testing 3 chunk sizes (10k, 50k, and 100k rows) showed no significant difference in overall execution time, hence a single line is graphed.
Figure 15. Processing time for 123.5M rows via ore.rowApply for chunk sizes 10k-100k
All tests were performed on an Exadata X3-8. Except as noted, the client R session and the database ran on the same machine, so network latency for data reads and writes was minimal. Over a LAN or WAN, the benefits of in-database execution and ORE will be even more dramatic.
DARPA Funds Python Big Data Effort
Webinar: Building a highly scaleable distributed row, document or column store with MySQL and Shard-Query
On Friday, February 15, 2013 10:00am Pacific Standard Time, I will be delivering a webinar entitled “Building a highly scaleable distributed row, document or column store with MySQL and Shard-Query” The first part of this webinar will focus on why distributed databases are needed, and on the techniques employed by Shard-Query to implement a distributed [...]
The post Webinar: Building a highly scaleable distributed row, document or column store with MySQL and Shard-Query appeared first on MySQL Performance Blog.
ODI - Extracting data from PDF Forms in 0 to 60
Here's an end to end viewlet illustrating how to reverse engineer form metadata from PDFs and also how to extract and integrate data. This carries on from the earlier posting here.