Error installing a node of a SQL Server 2008 R2 cluster

For the Spanish version, visit this link: Error instalando Nodo de Cluster SQL Server 2008 R2
 
This week we are migrating the production servers to SQL Server 2008 R2 SP1. During the installation of one of the cluster nodes, the following error occurred:
 
 
Error reading from file: X:\PATHFOLDER\x64\setup\sql_engine_core_inst_msi\PFiles\SqlServr\MSSQL.X\MSSQL\Binn\Template\master.mdf
Verify that the file exists and that you can access it.
 
After doing several tests, we saw that we couldn’t copy files with .mdf, .ndf, .ldf, and .bak extensions to the C:\ drive. We started checking user permissions, policies, and file permissions, but nothing worked, so after three hours we decided to stop. The next day a co-worker suggested it could be the antivirus; we disabled it and the installation worked like a charm.
 
This is an example of how a small, unexpected factor can give you a hard time. One thing is sure: from now on, at the slightest strange error, the first thing I will try is disabling the antivirus.
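Rather than disabling the antivirus entirely, a gentler alternative is to exclude the SQL Server file extensions from scanning. As a sketch, on current Windows versions with Microsoft Defender this could be done with the Defender PowerShell cmdlets (other antivirus products have their own equivalents; the path below is an example):

```powershell
# Exclude SQL Server data/log/backup file extensions from real-time scanning
# (sketch for Microsoft Defender; adjust for your antivirus product)
Add-MpPreference -ExclusionExtension ".mdf", ".ndf", ".ldf", ".bak"

# Optionally exclude the SQL Server installation folder as well (example path)
Add-MpPreference -ExclusionPath "C:\Program Files\Microsoft SQL Server"
```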
 

What have I learned in the last year? Resources and websites that will help you get started with SQL Server

For the Spanish version, visit this link: ¿Qué he aprendido en el último año? Recursos y webs que te ayudarán a comenzar con SQL Server
 
In a few days, on June 20th, it will be one year since I joined Avanade Spain, so I think this is a good moment to stop and reflect on everything I have learned in the last 366 days.
 
I can safely say that this was the year I’ve grown the most as a SQL Server professional. When I started at the company, I hardly knew how a cluster works; replication seemed a world apart; I wasn’t familiar with SQL scripting; and my English level was much worse (you can see how badly I write, but before it was even worse).
 
I owe my new knowledge in large part to my father, my co-workers, Google, the MSDN forums, and blogs that saved my life many times. I am going to list some of them:
  Read more of this post

How to create a SQL Server 2012 installer with CU1 integrated (Product Updates – Slipstream)

For the Spanish version, visit this link: Cómo crear un instalador de SQL Server 2012 con CU1 integrado (Product Updates – Slipstream)
 
Some time ago, I created an automated batch procedure to build a SQL Server 2008 R2 installer with SP1 integrated, a technique known as slipstream. In SQL Server 2012 this was replaced with a new functionality called Product Updates.
 
With the Product Updates functionality we can integrate a Service Pack or Cumulative Update into the SQL Server installer, which saves a lot of time, especially when you need to do multiple installations.
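As a sketch of how Product Updates is used, setup can be pointed at a local folder containing the extracted update package via the UpdateEnabled and UpdateSource parameters (the folder path here is an example):

```
Setup.exe /Action=Install /UpdateEnabled=True /UpdateSource="C:\SQL2012\Updates"
```

With these parameters, setup picks up the CU from the given folder and applies it as part of the main installation instead of requiring a separate patching step.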
 
Read more of this post

Resurrect SQL Server Agent after repairing an instance

For the Spanish version, visit this link: Resucitar Agente SQL Server después de reparar una instancia
 
After a failed upgrade from SQL Server 2005 to SQL Server 2008 R2 SP1, I decided to use the Repair option of the installer. The SQL Server resource was repaired correctly, but SQL Server Agent couldn’t start.
 
 
Read more of this post

Script: Reduce Log File depending on DB Data Files size

For the Spanish version, visit this link: Script: Reducir Log porcentualmente dependiendo del tamaño de los ficheros de Datos
 
A friend asked me for a dynamic script to reduce the database log file based on the total size of the database data files. Here is the answer:
/*--------------------------------------------------------------------------------------
-- File: ShrinkToPercent.sql
-- Author: Fran Lens (http://www.lensql.net)
-- Date: 2012-03-27
-- Description: Reduce LogFile Size based on percentage of the Total Size of DataFiles
--------------------------------------------------------------------------------------*/
DECLARE @SelectDB sysname
DECLARE @ShrinkPercent float
DECLARE @DBid int
DECLARE @RecoverySimple varchar(200)
DECLARE @RecoveryFull varchar(200)
DECLARE @ShrinkCommand nvarchar(500)
DECLARE @ShrinkFile sysname
DECLARE @ShrinkValue varchar(50)

SET @SelectDB = 'AdventureWorks2008R2' -- Database whose Log will be reduced
SET @ShrinkPercent = 30 -- Percentage of the DataFiles size the LogFile will be reduced to
						-- Example: with a value of 20 percent and 1GB of DataFiles, the Log will be reduced to 200MB

SET @DBid = (SELECT database_id FROM sys.databases WHERE name = @SelectDB)
SET @RecoverySimple = 'ALTER DATABASE [' + @SelectDB + '] SET RECOVERY SIMPLE WITH NO_WAIT' -- Change the recovery model to Simple
SET @ShrinkFile = (SELECT name FROM sys.master_files WHERE database_id = @DBid AND type_desc = 'LOG') -- Assumes the database has a single log file
SET @ShrinkValue = CAST(@ShrinkPercent / 100 * (SELECT SUM(size) / 128 FROM sys.master_files WHERE database_id = @DBid AND type_desc = 'ROWS') AS int) -- Target size in MB (size is in 8KB pages, so /128 converts to MB); DBCC SHRINKFILE expects an integer
SET @ShrinkCommand = 'USE [' + @SelectDB + ']' + CHAR(13) + 'DBCC SHRINKFILE([' + @ShrinkFile + '],' + @ShrinkValue + ')' -- Reduce the LogFile size
SET @RecoveryFull = 'ALTER DATABASE [' + @SelectDB + '] SET RECOVERY FULL WITH NO_WAIT' -- Change the recovery model to Full

EXEC (@RecoverySimple)
EXEC sp_executesql @ShrinkCommand
EXEC (@RecoveryFull)
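As a worked example, with the parameters above (30 percent) and a hypothetical database whose data files total 1024 MB, the script ends up building and executing something like the following (the logical log file name is an assumption; DBCC SHRINKFILE takes an integer target size in MB, so 30% of 1024 MB becomes 307):

```sql
ALTER DATABASE [AdventureWorks2008R2] SET RECOVERY SIMPLE WITH NO_WAIT
USE [AdventureWorks2008R2]
DBCC SHRINKFILE([AdventureWorks2008R2_log], 307) -- 30% of 1024 MB of data files
ALTER DATABASE [AdventureWorks2008R2] SET RECOVERY FULL WITH NO_WAIT
```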

Scripts: Jobs Not Executed in the Last Year / Job Activity Details

For the Spanish version, visit this link: Scripts: Jobs no ejecutados en el último año / Actividad de Jobs Detallada
 
A few days ago I was looking for a script to tell me when the last execution of a job was, so I could review the jobs that had not been executed in the last year and delete the ones that were not needed, but I couldn’t find anything valid, so I had to create one.
 
I use the last_run_date and last_run_time columns from the sysjobservers table to get this value. The scripts I found on the Internet used other columns from other tables; I will explain why I did not choose them:
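As a sketch of that approach (assuming the msdb system tables; last_run_date is stored as an int in yyyymmdd format, and 0 means the job never ran):

```sql
-- Jobs whose last recorded execution is more than a year old (or that never ran)
SELECT j.name,
       s.last_run_date,   -- int, yyyymmdd format; 0 = never executed
       s.last_run_time    -- int, hhmmss format
FROM msdb.dbo.sysjobs j
JOIN msdb.dbo.sysjobservers s ON s.job_id = j.job_id
WHERE s.last_run_date < CONVERT(int, CONVERT(varchar(8), DATEADD(YEAR, -1, GETDATE()), 112))
ORDER BY s.last_run_date
```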
 
Read more of this post

The merge process was unable to create a new generation at the ‘Publisher’

For the Spanish version, visit this link: El proceso de mezcla no pudo crear una nueva generación en ‘Publisher’
 
Yesterday we did a massive update of 1 million records in a table of the replicated database. After the update, we started the replication of one computer and saw a failure in the first publication after 2000 seconds, showing the following error:
 
“The merge process was unable to create a new generation at the ‘Publisher’. Troubleshoot by restarting the synchronization with verbose history logging and specify an output file to which to write.”
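The error message itself suggests the first diagnostic step: rerun the Merge Agent with verbose output written to a file. As a sketch, this can be done by appending the -OutputVerboseLevel and -Output parameters to the Merge Agent's job step command (the log file path is an example):

```
-OutputVerboseLevel 2 -Output "C:\Temp\mergeagent.log"
```

The resulting log records each step the agent performs, which makes it much easier to see where the generation creation fails.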
 
Read more of this post