Wednesday, October 26, 2016

DISK FULL - cannot SHRINK mdf

I would avoid shrinking if possible. It causes index fragmentation (and thus performance issues) and can also fragment the file system.
If you're in a crunch, add a second data file in the same filegroup on a different volume. SQL Server will start putting new data in that new file until the files are of equal size, then fill them roughly round-robin going forward.
This should let you avert a full drive until you can get a bigger disk.
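A sketch of the stop-gap described above (the database, file, and path names below are placeholders for your environment):

```sql
-- Add a second data file to the same filegroup on another volume.
ALTER DATABASE MyDb
ADD FILE
(
    NAME = N'MyDb_data2',
    FILENAME = N'E:\SQLData\MyDb_data2.ndf',  -- a volume with free space
    SIZE = 10GB,
    FILEGROWTH = 1GB
)
TO FILEGROUP [PRIMARY];
```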

Friday, September 23, 2016

JOB to run last day minus 1 day of the month



IF OBJECT_ID('tempdb.dbo.#CalendarView', 'U') IS NOT NULL
    DROP TABLE #CalendarView;
GO

CREATE TABLE #CalendarView (CalendarDate DATE);
GO

DECLARE @StartDate DATE = '20160101', @NumberOfYears INT = 30;
DECLARE @CutoffDate DATE = DATEADD(YEAR, @NumberOfYears, @StartDate);

INSERT INTO #CalendarView (CalendarDate)
SELECT d
FROM
(
    SELECT d = DATEADD(DAY, rn - 1, @StartDate)
    FROM
    (
        SELECT TOP (DATEDIFF(DAY, @StartDate, @CutoffDate))
            rn = ROW_NUMBER() OVER (ORDER BY s1.[object_id])
        FROM sys.all_objects AS s1
        CROSS JOIN sys.all_objects AS s2
        ORDER BY s1.[object_id]
    ) AS x
) AS y;

SELECT * FROM #CalendarView;
GO



--Create the job with no schedule. Then run the query below (e.g. from a daily wrapper job) so it starts the job only on the right day:
DECLARE @Today DATE = GETDATE();
DECLARE @FirstDayNextMonth DATE = DATEADD(MONTH, 1,
                                          DATEADD(DAY, 1 - DAY(@Today), @Today));
--Last day of the month minus 1 = two days before the first day of next month
DECLARE @LastBut1DayThisMonth DATE = DATEADD(DAY, -2,
                                             DATEADD(MONTH,
                                                     DATEDIFF(MONTH, 0, @Today) + 1, 0));

--PRINT @Today;
--PRINT @FirstDayNextMonth;
--PRINT @LastBut1DayThisMonth;
--Sanity check: confirm the target date exists in the calendar table
SELECT  MAX(CalendarDate)
FROM    #CalendarView
WHERE   CalendarDate = @LastBut1DayThisMonth;
IF @LastBut1DayThisMonth = @Today
    BEGIN
        EXEC msdb.dbo.sp_start_job 'BACKUPTEST'; --an unscheduled job that kicks off the SSIS package
    END;

  

Audit Table/SP CREATE, ALTER, DROP Statements

CREATE TABLE AuditTableChanges
(
    EventTime     DATETIME,
    DatabaseName  VARCHAR(100),
    EventName     VARCHAR(1000),
    ObjectName    VARCHAR(100),
    TSQLStatement VARCHAR(MAX),
    LoginName     VARCHAR(100)
)
go
---CREATE_PROC, ALTER_PROC, DROP_PROC,

CREATE TRIGGER AuditTableChanges
ON DATABASE
FOR  CREATE_TABLE, ALTER_TABLE, DROP_TABLE
AS

DECLARE @ed XML
SET @ed = EVENTDATA()

INSERT INTO AuditTableChanges (EventTime, DatabaseName, EventName, ObjectName, TSQLStatement, LoginName)
VALUES
(
GetDate(),
@ed.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'varchar(256)'),
@ed.value('(/EVENT_INSTANCE/EventType)[1]', 'nvarchar(100)'),
@ed.value('(/EVENT_INSTANCE/ObjectName)[1]', 'varchar(256)'),
@ed.value('(/EVENT_INSTANCE/TSQLCommand)[1]', 'nvarchar(2000)'),
@ed.value('(/EVENT_INSTANCE/LoginName)[1]', 'varchar(256)')
) ;


SELECT * from AuditTableChanges


To send email notifications:
USE Db
GO

/****** Object:  DdlTrigger [TableAltered]    Script Date: 9/23/2016 1:29:35 PM ******/
SET ANSI_NULLS ON
GO

SET QUOTED_IDENTIFIER ON
GO






CREATE TRIGGER [TableAltered]
ON DATABASE
FOR  ALTER_TABLE

AS

DECLARE @bd VARCHAR(MAX)
DECLARE @tsql VARCHAR(MAX)
SET @tsql = EVENTDATA().value
        ('(/EVENT_INSTANCE/TSQLCommand/CommandText)[1]','varchar(max)')
SET @bd = 'UserName: ' + UPPER(SUSER_NAME()) + '

  ServerName: ' + @@SERVERNAME + '

  Time: ' + CONVERT(VARCHAR(25),GETDATE()) + '

  HostName: ' + HOST_NAME() + '

  Database: ' + DB_NAME() + '

  T-SQL: ' +  @tsql


BEGIN
PRINT 'Table has been Altered'

EXEC msdb.dbo.sp_send_dbmail @profile_name = 'DBMail',
@recipients = 'abc@xyz.com',
@subject = 'A table has been altered!',
@body_format = 'HTML',
@importance = 'High',
@body = @bd
END









GO

ENABLE TRIGGER [TableAltered] ON DATABASE
GO


Thursday, September 1, 2016

Finding Implicit Column Conversions in the Plan Cache

 http://sqlblog.com/blogs/jonathan_kehayias/archive/2010/01/08/finding-implicit-column-conversions-in-the-plan-cache.aspx



http://www.sqlconsulting.com/news1210.htm

Good article on Full-Text Search and change tracking for full-text search

http://sql-articles.com/articles/dba/sql-server-2005-full-text-search/




Populating a full-text index:
Creating and maintaining a full-text index involves populating it through a process called a population (also known as a crawl). SQL Server supports the following population types: full population, change-tracking-based automatic or manual population, and incremental timestamp-based population.
Full population:
Whenever we create a full-text index there is, by default, a full population for all rows, during which index entries are built for every row of the base table or indexed view. Because a full population consumes a lot of resources, it should be run during off-production hours; to defer it, create the index with the option CHANGE_TRACKING OFF, NO POPULATION, which turns off change tracking and skips the population at creation time.
Change-tracking-based population: there are 2 types.
a.) Automatic: whenever an insert, update, or delete hits the indexed table, only those rows are repopulated, automatically, so a change to the text column is available in the full-text index almost immediately.
b.) Manual: DML changes are tracked but not propagated automatically to the full-text index; we need to run the update-population command (typically from a scheduled job) to push just the tracked changes into the index.
The overhead in both cases is that SQL Server maintains a table tracking which tables and rows were modified.
Incremental population: the table needs a timestamp column in order to use incremental population. It populates only the rows modified since the previous incremental population, which suits tables that change frequently.
After a population, the delay before the data becomes searchable is at most one to two minutes; changes that have not been populated are not searchable at all.
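The population options above can be sketched as follows (the table, key index, and catalog names are made up for illustration):

```sql
-- Create the index without populating it (defer the crawl to off-hours):
CREATE FULLTEXT INDEX ON dbo.Articles (Body)
KEY INDEX PK_Articles
ON MyFullTextCatalog
WITH CHANGE_TRACKING OFF, NO POPULATION;

-- Later, switch on automatic change tracking (this kicks off a full population):
ALTER FULLTEXT INDEX ON dbo.Articles SET CHANGE_TRACKING AUTO;

-- Manual mode alternative: propagate tracked changes on demand, e.g. from a job:
-- ALTER FULLTEXT INDEX ON dbo.Articles SET CHANGE_TRACKING MANUAL;
-- ALTER FULLTEXT INDEX ON dbo.Articles START UPDATE POPULATION;

-- Incremental population (requires a timestamp/rowversion column on the table):
-- ALTER FULLTEXT INDEX ON dbo.Articles START INCREMENTAL POPULATION;
```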

Thursday, August 25, 2016

Performance issues with composite primary keys: index fragmentation

Say you have a table with 10 columns and a composite primary key on 3 of them (ID, Name, Phone).
When you have a lot of updates on Name or Phone, the index on the composite key gets heavily fragmented, because updating a key column moves the row within the index.
Better:
try to keep ID alone as the primary key and give it the clustered index,
or
keep (ID, Name) as the composite key,

and think about whether Phone should instead be enforced with a separate constraint.

Note:
First see how the index is actually getting fragmented, do some analysis, and test thoroughly so the application won't break; then implement in Prod.
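A minimal sketch of the suggested layout (table and constraint names are made up): a surrogate clustered primary key, with Phone enforced by a separate nonclustered unique constraint, so updates to Name or Phone never move rows in the clustered index.

```sql
CREATE TABLE dbo.Contact
(
    ID    INT IDENTITY(1,1) NOT NULL,
    Name  VARCHAR(100) NOT NULL,
    Phone VARCHAR(20)  NOT NULL,
    -- Narrow, stable clustered key: updates to Name/Phone don't relocate rows
    CONSTRAINT PK_Contact PRIMARY KEY CLUSTERED (ID),
    -- Uniqueness on Phone enforced separately, off the clustered key
    CONSTRAINT UQ_Contact_Phone UNIQUE NONCLUSTERED (Phone)
);
```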

Thanks 

REBUILD vs REORGANIZE INDEX with ONLINE = ON?

Just a quick note:

REORGANIZE INDEX with ONLINE = ON is not supported (reorganizing is always an online operation),
whereas REBUILD INDEX does support WITH (ONLINE = ON), on Enterprise edition.
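A quick illustration (index and table names are placeholders):

```sql
-- REORGANIZE is always online and takes no ONLINE option:
ALTER INDEX IX_MyIndex ON dbo.MyTable REORGANIZE;

-- REBUILD accepts ONLINE = ON (Enterprise edition):
ALTER INDEX IX_MyIndex ON dbo.MyTable REBUILD WITH (ONLINE = ON);
```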

Wednesday, August 24, 2016

Issues with SQL Database Log Backups

I have 1 production server X, database Auditdb, Enterprise edition, SQL Server 2014.
Full backups run at 12am every night (no differentials).
T-log backups run every 10 minutes. Today someone updated a big table without a WHERE clause and I had to restore that data, so I tried it in our UAT server first, planning to then compare or update manually.
So, in UAT, when I tried to restore: the 12:01:39am full backup WITH NORECOVERY -- good.
Then I started restoring all the T-log backups starting from 12:10:11am. Based on the scripts attached, I was able to restore the first 2 T-logs WITH NORECOVERY, but it failed on the 3rd T-log file.
I have attached the error I got. I tried to look at the LSNs and the file names based on date and did a lot of research, but no luck.
Can someone please take a look at it and let me know what I am doing wrong?
And please provide any working script to restore T-logs in order.
How can I know the next T-log file to restore, after Full, tlog1, tlog2, and so on?

You know what I found:
We have AlwaysOn Availability Groups (primary replica and secondary replica), and I have FULL (12am daily) and log backup (every 10 mins) jobs running on the primary and secondary replicas using a maintenance plan that checks IF PRIMARY THEN BACKUP, ELSE EXIT.
So something is happening at 12:31am every day on the secondary: a T-log backup is being taken by NT AUTHORITY\SYSTEM, and the file name is NULL. I looked at all jobs to see if any take T-logs but found no jobs running at that time. I am going to run a trace to find out what NT AUTHORITY\SYSTEM is doing between 12:00 and 1:45 AM, and I also disabled the T-log backup job mentioned above on the secondary.
Note:
The primary was always primary; no failover happened.
We have Veeam backups that kick off at 1am, and those do a file copy, not SQL-style backups.
There are no scheduled jobs in the OS or SQL that could kick off a T-log backup other than mine.
This issue happens every day at the same time on the secondary replica, for all databases.
    
SELECT   top 10 
        [b].[backup_start_date] ,
        [b].[backup_finish_date] ,
        [b].[type] ,
        [b].[first_lsn] ,
        [b].[last_lsn] 
FROM    [msdb].[dbo].[backupset] AS [b] 
WHERE   [b].[backup_start_date] >= '2016-08-23 00:01:00' AND database_name LIKE '%Auditdb%'
ORDER BY [b].[backup_start_date];
 backup_start_date backup_finish_date type first_lsn last_lsn
2016-08-23 00:01:18.000 2016-08-23 00:05:06.000 D 13983000002259000001 13983000002262100001
2016-08-23 00:10:07.000 2016-08-23 00:10:07.000 L 13983000002259000001 13983000002264600001
2016-08-23 00:20:14.000 2016-08-23 00:20:14.000 L 13983000002264600001 13983000002265500001
2016-08-23 00:30:05.000 2016-08-23 00:30:05.000 L 13983000002265500001 13983000002266400001
2016-08-23 00:40:05.000 2016-08-23 00:40:05.000 L 13983000002267000001 13983000002268000001
2016-08-23 00:50:06.000 2016-08-23 00:50:06.000 L 13983000002268000001 13983000002268900001
2016-08-23 01:00:06.000 2016-08-23 01:00:06.000 L 13983000002268900001 13983000002287100001
2016-08-23 01:05:05.000 2016-08-23 01:05:05.000 L 13983000002287100001 13983000002290400001
2016-08-23 01:10:05.000 2016-08-23 01:10:05.000 L 13983000002290400001 13983000002291300001
2016-08-23 01:20:05.000 2016-08-23 01:20:05.000 L 13983000002291300001 13983000002292200001

 SECONDARY:
backup_start_date backup_finish_date type first_lsn last_lsn
2016-08-23 00:33:24.000 2016-08-23 00:33:24.000 L 13983000002266400001 13983000002267000001
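One way to answer "which T-log file is next" and to flag the break programmatically: each log backup's first_lsn should equal the previous backup's last_lsn. A sketch against the same msdb tables (LAG requires SQL Server 2012 or later):

```sql
-- Rows flagged 'LSN CHAIN BREAK' mark where a log backup is missing from the sequence.
SELECT  b.backup_start_date,
        b.type,
        b.first_lsn,
        b.last_lsn,
        CASE WHEN b.first_lsn <> LAG(b.last_lsn) OVER (ORDER BY b.backup_start_date)
             THEN 'LSN CHAIN BREAK' END AS chain_check
FROM    msdb.dbo.backupset AS b
WHERE   b.database_name = 'Auditdb'
  AND   b.type IN ('D', 'L')
ORDER BY b.backup_start_date;
```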





THEN:


I found why the LSN chain of the SQL Server database backups is breaking on FPMC at 12:30am and on FPVI at 1am daily.

On FPMC at 12:30am:
I looked at the SQL Profiler trace data for login name "NT AUTHORITY\SYSTEM" and found Veeam Endpoint Backup running:
       DECLARE @database_name_var NVARCHAR(255); SET @database_name_var = N'AuditDB'; BACKUP LOG @database_name_var TO DISK = 'NUL';
This is happening against all databases, truncating the log and breaking the LSN chain (backup to NUL is trash, there is no real destination), and it should be stopped immediately in order to keep point-in-time recovery.

From my googling I found that Veeam does, by default, back up the log to the file name "nul" before it takes the VSS snapshot.
We need to modify Veeam Endpoint Backup and remove the step that does the backup-log-to-NUL, which we don't need; the same thing is happening on FPVI-SQL1HA at 1am.


FYI: Veeam Endpoint Backup is scheduled to run backups on FPMC at 12:30am and on FPVI at 1am daily.



I added a (32-bit) DWORD VSSForceOnlyCopy = 1 to the following registry location on both SQL HA nodes: HKLM\SOFTWARE\VEEAM\VEEAM ENDPOINT BACKUP

From what I read, that should fix the issue. 
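A way to verify the fix afterwards (assuming, as the trace showed, that Veeam was writing log backups to the NUL device): check msdb for any further backups whose destination is NUL.

```sql
-- Any rows returned after the registry change mean NUL backups are still occurring.
SELECT b.database_name, b.backup_start_date, b.type, mf.physical_device_name
FROM msdb.dbo.backupset AS b
JOIN msdb.dbo.backupmediafamily AS mf
  ON b.media_set_id = mf.media_set_id
WHERE mf.physical_device_name IN ('NUL', 'nul')
ORDER BY b.backup_start_date DESC;
```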





Monday, August 22, 2016

RESTORE MULTIPLE LOG backup files

DECLARE @databaseName sysname
DECLARE @backupStartDate datetime
DECLARE @backup_set_id_start INT
DECLARE @backup_set_id_end INT

-- set database to be used
SET @databaseName = 'databasename'

SELECT @backup_set_id_start = MAX(backup_set_id)
FROM msdb.dbo.backupset
WHERE database_name = @databaseName AND type = 'D'

SELECT @backup_set_id_end = MIN(backup_set_id)
FROM msdb.dbo.backupset
WHERE database_name = @databaseName AND type = 'D'
AND backup_set_id > @backup_set_id_start

IF @backup_set_id_end IS NULL SET @backup_set_id_end = 999999999

SELECT backup_set_id, 'RESTORE DATABASE ' + @databaseName + ' FROM DISK = '''
+ mf.physical_device_name + ''' WITH NORECOVERY'
FROM msdb.dbo.backupset b,
msdb.dbo.backupmediafamily mf
WHERE b.media_set_id = mf.media_set_id
AND b.database_name = @databaseName
AND b.backup_set_id = @backup_set_id_start
UNION
SELECT backup_set_id, 'RESTORE LOG ' + @databaseName + ' FROM DISK = '''
+ mf.physical_device_name + ''' WITH NORECOVERY'
FROM msdb.dbo.backupset b,
msdb.dbo.backupmediafamily mf
WHERE b.media_set_id = mf.media_set_id
AND b.database_name = @databaseName
AND b.backup_set_id >= @backup_set_id_start AND b.backup_set_id < @backup_set_id_end
AND b.type = 'L'
UNION
SELECT 999999999 AS backup_set_id, 'RESTORE DATABASE ' + @databaseName + ' WITH RECOVERY'
ORDER BY backup_set_id

Friday, August 19, 2016

ENABLE INDEX AFTER DISABLED

ALTER INDEX uci_MSpeer_originatorid_history on MSpeer_originatorid_history REBUILD

How to implement Encryption in Sensitive data column

USE master
GO
CREATE DATABASE CellLevelEncryptionDemo
GO
Use   CellLevelEncryptionDemo
GO
--create a database master key (DMK) for the master database
CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'F#4*KN^7$';
GO

--create a certificate for use as the database encryption key   (DEK) protector and is protected by the DMK.
CREATE CERTIFICATE CertificateforColumns
WITH SUBJECT = 'Certificate for Column Level encryption';
GO


--Create a symmetric key and encrypt it by using above created   certificate
CREATE SYMMETRIC KEY EncryptColumn
     WITH ALGORITHM = AES_256 --supported algorithms: AES with 128-, 192-, or 256-bit keys, or 3-key Triple DES
     ENCRYPTION BY CERTIFICATE CertificateforColumns;
GO


--To decrypt the encrypted data, use the built-in function DecryptByKey.
--If you try encrypting or decrypting data without first opening the symmetric key, the command will not fail, but it will not work either: it returns NULLs instead of data.
--It is essential to back up the keys and certificates, so the encrypted database can be restored or attached on another SQL Server instance after restoring those keys/certificates there.
--Note that cell-level encryption carries overhead and performance penalties (it is resource-intensive, and because the data is salted differently it defeats indexing on the column and causes table scans),
--so first evaluate the need and then plan the implementation.
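Since the comments above stress backing up the keys, here is a sketch of those backups (the paths and passwords are placeholders):

```sql
-- Run in the user database. Store the files and passwords securely, off the server.
BACKUP MASTER KEY TO FILE = 'C:\Backup\CellLevelEncryptionDemo_DMK.key'
    ENCRYPTION BY PASSWORD = '<StrongPassword1>';

BACKUP CERTIFICATE CertificateforColumns
    TO FILE = 'C:\Backup\CertificateforColumns.cer'
    WITH PRIVATE KEY
    (
        FILE = 'C:\Backup\CertificateforColumns.pvk',
        ENCRYPTION BY PASSWORD = '<StrongPassword2>'
    );
```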



-- drop table Customer
--go
CREATE TABLE dbo.Customer
(
CustomerID   INT IDENTITY PRIMARY KEY,
FirstName   VARCHAR(100),
LastName   VARCHAR(100),
CreditCardNumber   Varchar(128) NULL
)
GO
 INSERT INTO dbo.Customer(FirstName,   LastName, CreditCardNumber)
VALUES
('Steve', 'Savage','1111-1111-1111-1111'),
('Ranjit', 'Srivastava','2222-2222-2222-2222'),
('Akram', 'Haque', '3333-3333-3333-3333')
GO

SELECT * from Customer
go


-----------FROM HERE Actually Steps Start

ALTER TABLE Customer
ADD DecCreditCardNumber VARBINARY(256)
GO


--The symmetric key must be open, or ENCRYPTBYKEY silently returns NULL
OPEN SYMMETRIC KEY EncryptColumn DECRYPTION BY CERTIFICATE CertificateforColumns;

UPDATE Customer
SET DecCreditCardNumber = ENCRYPTBYKEY(KEY_GUID('EncryptColumn'), CreditCardNumber)
GO

--CHECK DATA
SELECT * from Customer

SELECT CustomerID,FirstName,LastName,CreditCardNumber,DecCreditCardNumber, CONVERT(VARCHAR(50),DECRYPTBYKEY(DecCreditCardNumber)) AS DecryptSecondCol
from Customer

CLOSE SYMMETRIC KEY   EncryptColumn;
GO

 --Check any indexes, FKs, constraints
 --Backups

--Back up the database (or table) first
ALTER TABLE Customer
DROP COLUMN CreditCardNumber


 --sp_rename 'table_name.old_column_name', 'new_column_name', 'COLUMN';
 EXEC sp_rename 'Customer.DecCreditCardNumber', 'CreditCardNumber', 'COLUMN'

 --Check any indexes, FKs, constraints

SELECT * from Customer
go

 --Returns NULL here: the symmetric key has not been opened yet
 SELECT CustomerID,FirstName,LastName,CreditCardNumber, CONVERT(VARCHAR(50),DECRYPTBYKEY(CreditCardNumber)) AS DecryptSecondCol
from Customer








--If you're using cell encryption, you need to call EncryptByKey/DecryptByKey on every insert/select; the app code needs to reflect this.
 --Insert method: Joe needs to change this. Note: the symmetric key must be open here too, or ENCRYPTBYKEY inserts NULL.
DECLARE @CreditCardNumber VARCHAR(128);
SET @CreditCardNumber = '123456789087';
INSERT INTO dbo.Customer (FirstName, LastName, CreditCardNumber)
VALUES ('Kari', 'Singh', ENCRYPTBYKEY(KEY_GUID('EncryptColumn'), @CreditCardNumber));




OPEN SYMMETRIC KEY EncryptColumn DECRYPTION
BY CERTIFICATE CertificateforColumns
SELECT CustomerID,FirstName,LastName, CreditCardNumber,CONVERT(VARCHAR(50),DECRYPTBYKEY(CreditCardNumber)) AS DecryptSecondCol
from Customer

SELECT *
from Customer
CLOSE SYMMETRIC KEY   EncryptColumn;
GO


Thursday, August 18, 2016

Clearing Wait Stats and Procedure Cache won’t affect Index Stats , tested and confirmed in UAT.

Clearing Wait Stats and Procedure Cache won’t affect Index Stats , tested and confirmed in UAT.
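For reference, the commands in question; clearing these does not touch index statistics (sys.dm_db_index_usage_stats, physical fragmentation stats, etc.):

```sql
DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR);  -- reset accumulated wait stats
DBCC FREEPROCCACHE;                           -- clear the plan cache (use with care in Prod)
```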

NOLOCK will propagate to the tables used by the view definition

USE AdventureWorks2008R2;
GO
CREATE VIEW EmployeeName AS
SELECT h.BusinessEntityID, p.LastName, p.FirstName
FROM HumanResources.Employee AS h
JOIN Person.Person AS p  
ON h.BusinessEntityID = p.BusinessEntityID;
GO


--select from view
SELECT * from EmployeeName WITH (NOLOCK)



 NOLOCK will propagate to the tables used by the view definition 

Wednesday, August 17, 2016

Finding Implicit Column Conversions in the Plan Cache

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED;

DECLARE @dbname SYSNAME;
SET @dbname = QUOTENAME(DB_NAME());

WITH XMLNAMESPACES
   (DEFAULT 'http://schemas.microsoft.com/sqlserver/2004/07/showplan')
SELECT
   stmt.value('(@StatementText)[1]', 'varchar(max)'),
   t.value('(ScalarOperator/Identifier/ColumnReference/@Schema)[1]', 'varchar(128)'),
   t.value('(ScalarOperator/Identifier/ColumnReference/@Table)[1]', 'varchar(128)'),
   t.value('(ScalarOperator/Identifier/ColumnReference/@Column)[1]', 'varchar(128)'),
   ic.DATA_TYPE AS ConvertFrom,
   ic.CHARACTER_MAXIMUM_LENGTH AS ConvertFromLength,
   t.value('(@DataType)[1]', 'varchar(128)') AS ConvertTo,
   t.value('(@Length)[1]', 'int') AS ConvertToLength,
   query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(plan_handle) AS qp
CROSS APPLY query_plan.nodes('/ShowPlanXML/BatchSequence/Batch/Statements/StmtSimple') AS batch(stmt)
CROSS APPLY stmt.nodes('.//Convert[@Implicit="1"]') AS n(t)
JOIN INFORMATION_SCHEMA.COLUMNS AS ic
   ON QUOTENAME(ic.TABLE_SCHEMA) = t.value('(ScalarOperator/Identifier/ColumnReference/@Schema)[1]', 'varchar(128)')
   AND QUOTENAME(ic.TABLE_NAME) = t.value('(ScalarOperator/Identifier/ColumnReference/@Table)[1]', 'varchar(128)')
   AND ic.COLUMN_NAME = t.value('(ScalarOperator/Identifier/ColumnReference/@Column)[1]', 'varchar(128)')
WHERE t.exist('ScalarOperator/Identifier/ColumnReference[@Database=sql:variable("@dbname")][@Schema!="[sys]"]') = 1



http://sqlblog.com/blogs/jonathan_kehayias/archive/2010/01/08/finding-implicit-column-conversions-in-the-plan-cache.aspx

https://blog.sqlauthority.com/2009/06/27/sql-server-fix-error-17892-logon-failed-for-login-due-to-trigger-execution-changed-database-context...