
First DAX Advanced Workshop in London, May 2013 #dax #tabular #ssas


Are you working with SSAS Tabular? Are you an experienced PowerPivot user? In either case, you should be aware that there is one skill that matters above all for both PowerPivot and SSAS Tabular: the DAX language. Alberto and I have been using DAX since 2010 and have written several books with chapters dedicated to DAX, and we know that there is still much to do. We have plans to publish more content online (more on this in a few months…), but we have realized that the number of companies building Tabular models is increasing every day. The common issues we see are about design, calculations, queries and performance. All of them are related to DAX, and we understand that learning DAX requires mentoring and practice (if only we had had that three years ago…).

Well, the good news is that now you can learn DAX faster and in more depth. We have created a new intensive DAX course that we call the DAX Advanced Workshop. It is a three-day classroom course aimed at advanced PowerPivot users and Analysis Services developers who want to master the DAX language and improve their performance-optimization skills. The course includes hands-on lab sessions assisted by the trainer (me or Alberto), with exercises in writing queries, solving business problems and locating performance bottlenecks in DAX.

Prerequisite: attendees need a basic knowledge of SQL Server 2012 Analysis Services Tabular modeling, or they need to be familiar with PowerPivot for Excel and to have produced at least some basic reports. Participation in an SSAS Tabular or PowerPivot Workshop, or equivalent experience, is also a prerequisite of the course.

If you think you’re ready for that, we have a single date in Europe before the summer: London, May 13-15, 2013. You can download the course outline and register here. Seats are limited, because the hands-on labs require real assistance from the trainer. You will need to bring your own laptop for the hands-on labs. It will be fun, but it will be tough!

We don’t have plans for other editions until next fall, so if you are interested, free your agenda. Unless you want an on-site edition on another date, of course.

Please let me know if you are interested in a US edition. You might have a good excuse to visit London, but if that is not enough, give me your feedback. We will evaluate demand from the US in order to schedule other public classes there.


My guiding Magic Quadrant

I often get nice compliments like “where did that insight come from?” or “how could you rattle off so many case studies around that technology trend?” Conversely, I find myself disagreeing with fellow bloggers and analysts, and when I...

The Action Can’t Be Completed Because the File is Open in NodeRunner.exe

On an Exchange Server 2013 server that is installed with the Mailbox server role, when you attempt to delete the files and folders for a database or database copy that has been removed, you may receive an error.

In the data center or information factory, not everything is the same

Sometimes what should be understood, what is common sense, or what you think everybody should know still needs to be stated. After all, there could be somebody who does not know what some assume to be common sense, or what others already know, for various reasons. At times, there is simply the need to restate or have…

Possibility Of Green Data Center REIT

The near future may see the advent of a green data center REIT. Today, the renewable energy industry is looking at taking advantage of specific investment structures like those used by the real estate and oil and gas industries. This could be in lieu of specific tax breaks. A REIT, or real estate investment trust, could…

Over $3M in Prizes to Hack Google Chrome


Google's contest at the CanSecWest conference:

Today we’re announcing our third Pwnium competition: Pwnium 3. Google Chrome is already featured in the Pwn2Own competition this year, so Pwnium 3 will have a new focus: Chrome OS.

We’ll issue Pwnium 3 rewards for Chrome OS at the following levels, up to a total of $3.14159 million USD:

  • $110,000: browser or system level compromise in guest mode or as a logged-in user, delivered via a web page.
  • $150,000: compromise with device persistence -- guest to guest with interim reboot, delivered via a web page.

We believe these larger rewards reflect the additional challenge involved with tackling the security defenses of Chrome OS, compared to traditional operating systems.

News article.

Using a Non-Exchange Server as an Exchange 2013 DAG File Share Witness

When creating an Exchange Server 2013 DAG you may receive an error that the Exchange Trusted Subsystem is not a member of the local Administrators group on the specified witness server.

On Video: Demo Of The BlackBerry Q10


CA Technologies Announces Centre Of Excellence



In partnership with Chitkara University in Punjab and Himachal Pradesh, the centre targets undergraduates and will embed CA curricula in the IT curriculum.



Change Management in the Heavy Duty Equipment Industry

Over the last year we have been teaming with a $300 million heavy duty equipment manufacturer. The company manufactures and distributes products that are used in the transportation and logistics market. Profit margins in this industry have been under significant pressure since 2009, and the heavy duty equipment industry is just coming out of a recession. [...]

Obtaining rowcounts when using Composable DML [T-SQL]


In my August 2009 blog post Exploring Composable DML I introduced a feature that was new in SQL Server 2008 called Composable DML, and also outlined one of its limitations: namely, that data from the OUTPUT clause cannot be aggregated prior to insertion. Composable DML does, however, have some useful scenarios, and one of them is capturing and storing values that are replaced by an UPDATE (which I have talked about before in Using Composable DML to maintain entity history). Here’s the basic premise:

INSERT old_values (id, name) --use Composable DML to store the values that were replaced by an UPDATE
SELECT  mrgout.deleted_id
,       mrgout.deleted_name
FROM    (
        MERGE   tgt
        USING   src
            ON  tgt.id = src.id
        WHEN MATCHED THEN
            UPDATE SET tgt.name = src.name
        WHEN NOT MATCHED THEN
            INSERT (id, name)
            VALUES (src.id, src.name)
        OUTPUT  $ACTION AS action_
        ,       INSERTED.id   AS inserted_id
        ,       INSERTED.name AS inserted_name
        ,       DELETED.id    AS deleted_id
        ,       DELETED.name  AS deleted_name
        ) mrgout
WHERE   mrgout.action_ = 'UPDATE' --Filtering on $ACTION = 'UPDATE' gives us the replaced values from the DELETED virtual table
;

This statement updates some rows in [tgt] and stores the old values in [old_values]. I think that’s rather useful, especially in a data warehousing scenario where one may wish to MERGE into a type 1 dimension table. Unfortunately this scenario gives rise to another limitation of Composable DML: the value returned by @@ROWCOUNT is the number of rows that were affected in [old_values], not in [tgt]. The following code (which you can simply copy/paste and execute, and which is also available on pastebin) demonstrates the problem:

/******************************************************************************************************************************
A demonstration of capturing rowcounts when using composable DML. The problem I'm trying to demonstrate here is that 
I don't think there is a way to capture the number of rows affected by the MERGE
 
Jamie Thomson, 2013-02-07
******************************************************************************************************************************/
 
/*Setup table first and insert some data into [src]*/
USE tempdb
IF OBJECT_ID('src') IS NOT NULL		DROP TABLE src;
CREATE TABLE src (
	id		INT
,	name    NVARCHAR(MAX)
);
IF OBJECT_ID('tgt') IS NOT NULL		DROP TABLE tgt;
CREATE TABLE tgt (
	id		INT
,	name    NVARCHAR(MAX)
);
/*[old_values] will be used as the target of the Composable DML insertion*/
IF OBJECT_ID('old_values') IS NOT NULL		DROP TABLE old_values;
CREATE TABLE old_values (
	id		INT
,	name    NVARCHAR(MAX)
);
INSERT src(id,name)VALUES(1,'don'),(2,'kaina');
GO
 
/*Everything after here gets run twice because the batch ends with GO 2*/
INSERT old_values (id, name) --use Composable DML to store the values that were replaced by an UPDATE
SELECT  mrgout.deleted_id
,       mrgout.deleted_name
FROM    (
        MERGE   tgt
        USING   src
            ON  tgt.id = src.id
        WHEN MATCHED THEN
            UPDATE SET tgt.name = src.name
        WHEN NOT MATCHED THEN
            INSERT (id, name)
            VALUES (src.id, src.name)
        OUTPUT  $ACTION AS action_
        ,       INSERTED.id   AS inserted_id
        ,       INSERTED.name AS inserted_name
        ,       DELETED.id    AS deleted_id
        ,       DELETED.name  AS deleted_name
        ) mrgout
WHERE   mrgout.action_ = 'UPDATE' --Filtering on $ACTION = 'UPDATE' gives us the replaced values from the DELETED virtual table
;
SELECT  [@@ROWCOUNT]     = @@ROWCOUNT  -- <- tallies only rows affected by the outer INSERT, not by the MERGE
,       row_tally_in_tgt = (SELECT COUNT(*) FROM tgt)
GO 2

Here is the output:

[Screenshot: the two result sets, showing [@@ROWCOUNT] and row_tally_in_tgt for each of the two executions]

Notice that the Composable DML containing the MERGE statement is executed twice. The first execution inserts two rows into [tgt], yet @@ROWCOUNT returns zero because zero rows were inserted into [old_values] by the outer query. The second execution updates two rows in [tgt], hence two rows are inserted into [old_values] and @@ROWCOUNT returns two. It appears there is no way to discover the number of inserts or updates that were committed by the MERGE; if you’re a fan of logging rowcounts during ETL operations (which I think you should be) then this is a big problem. The only way I can think of to get around this problem is to break the statement into two, like so (for brevity I haven’t included the full code listing, but it is also available on pastebin):

/*Setup part is the same as before, though we do need an extra table for capturing the output of our MERGE*/
IF OBJECT_ID('mrgout') IS NOT NULL		DROP TABLE mrgout;
CREATE TABLE mrgout (
	action_			NVARCHAR(MAX)
,	inserted_id		INT
,	inserted_name    NVARCHAR(MAX)
,	deleted_id		INT
,	deleted_name    NVARCHAR(MAX)
);
INSERT src(id,name)VALUES(1,'don'),(2,'kaina');
GO
 
/*Everything after here gets run twice because the batch ends with GO 2*/
TRUNCATE TABLE mrgout;
INSERT mrgout (action_, inserted_id, inserted_name, deleted_id, deleted_name)
SELECT  mrgout.action_
,       mrgout.inserted_id
,       mrgout.inserted_name
,       mrgout.deleted_id
,       mrgout.deleted_name
FROM    (
        MERGE   tgt
        USING   src
            ON  tgt.id = src.id
        WHEN MATCHED THEN
            UPDATE SET tgt.name = src.name
        WHEN NOT MATCHED THEN
            INSERT (id, name)
            VALUES (src.id, src.name)
        OUTPUT  $ACTION AS action_
        ,       INSERTED.id   AS inserted_id
        ,       INSERTED.name AS inserted_name
        ,       DELETED.id    AS deleted_id
        ,       DELETED.name  AS deleted_name
        ) mrgout
;
INSERT  dbo.old_values (id, name)
SELECT  deleted_id, deleted_name FROM mrgout
WHERE   mrgout.action_ = 'UPDATE' --Filtering on $ACTION = 'UPDATE' gives us the replaced values from the DELETED virtual table
;
SELECT  [INSERT_@@ROWCOUNT] = (SELECT COUNT(*) FROM mrgout WHERE action_ = 'INSERT')
,       [UPDATE_@@ROWCOUNT] = (SELECT COUNT(*) FROM mrgout WHERE action_ = 'UPDATE')
,       row_tally_in_tgt    = (SELECT COUNT(*) FROM tgt)
GO 2

Executing that returns:

[Screenshot: the two result sets, showing [INSERT_@@ROWCOUNT], [UPDATE_@@ROWCOUNT] and row_tally_in_tgt for each of the two executions]

This is much better. We now know the tally of insertions and updates committed by the MERGE; unfortunately we have had to do it in two separate statements, which in a way defeats the point of using MERGE in the first place (and don’t forget some of the other current problems with MERGE). If you can think of a better way of doing it then I’m all ears, so please reply in the comments below.
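As an aside, a variation on the same idea is to have the MERGE’s OUTPUT clause write INTO a table variable rather than into the permanent [mrgout] table; that removes the staging table and the TRUNCATE, though it is still two statements. Here is a minimal sketch, assuming the same [src], [tgt] and [old_values] tables as above (the @mrg table variable name is purely illustrative):

--A sketch: capture the MERGE output in a table variable via OUTPUT ... INTO
DECLARE @mrg TABLE (
    action_       NVARCHAR(10)
,   deleted_id    INT
,   deleted_name  NVARCHAR(MAX)
);
MERGE   tgt
USING   src
    ON  tgt.id = src.id
WHEN MATCHED THEN
    UPDATE SET tgt.name = src.name
WHEN NOT MATCHED THEN
    INSERT (id, name)
    VALUES (src.id, src.name)
OUTPUT  $ACTION, DELETED.id, DELETED.name
INTO    @mrg (action_, deleted_id, deleted_name);

--Store the replaced values, as before
INSERT  old_values (id, name)
SELECT  deleted_id, deleted_name
FROM    @mrg
WHERE   action_ = 'UPDATE';

--Per-action tallies for ETL logging, computed from the captured output
SELECT  merge_inserts = SUM(CASE WHEN action_ = 'INSERT' THEN 1 ELSE 0 END)
,       merge_updates = SUM(CASE WHEN action_ = 'UPDATE' THEN 1 ELSE 0 END)
FROM    @mrg;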

I’m not saying don’t use MERGE and I’m not saying don’t use Composable DML; just be aware of their limitations. Personally I think there should be built-in functions, similar to @@ROWCOUNT, that return the number of rows INSERTed/DELETEd/UPDATEd by a MERGE; Aaron Bertrand agreed and raised a Connect submission to that effect, Katmai : Merge does not distinguish rowcounts in triggers, which has, unfortunately, been closed as “won’t fix”.

@Jamiet

Gartner Magic Quadrant Spots Data Warehouse Leaders

Gartner's annual report notes a wave of data warehousing newcomers and leading-edge companies taking advantage of big data.


Improvement methods – is there a best?

Constraints help creativity and productivity


Sanovi Technologies Unveils New Release Of Its DR Solution Suite

BI, Big Data Highest IT Spend Priorities For 60% Of CIOs

Tata Communications Extends Managed Services Offering For MNOs

Microserver Market Not So Micro

As companies pursue greater efficiency in their data centers, particularly with regard to lighter workloads such as those resulting from the burgeoning mobile market, the microserver is stealing the spotlight from the traditional server form factor. Microservers are set for explosive growth in coming years, so here’s a brief look at what you can expect.

Apple Is World's Largest PC Maker--If You Count iPads as PCs

Report says 1 out of 6 computing devices sold last year was an iPad.