Friday, March 30, 2012
Immediate Updating Subscriber Not Queuing Updates
I have a problem with replication that I was hoping someone might be able to
shed some light on. Here's the scenario.
Server A is set up as publisher and distributor. A publication was created and
marked for transactional replication with immediate updating, with queued
updating as a failover.
Server B is the only subscriber. I had to run sp_link_publication in order
to get it to work.
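In case it matters, the call was along these lines (the server, database, publication, and login here are placeholders, not my real values):
--
-- Run at the Subscriber (Server B) in the subscription database; this
-- configures the connection the immediate-updating triggers use to
-- reach the Publisher (Server A).
EXEC sp_link_publication
    @publisher = 'SERVERA',
    @publisher_db = 'PubDB',
    @publication = 'MyPublication',
    @security_mode = 0,   -- 0 = SQL Server authentication using @login/@password
    @login = 'repl_login',
    @password = 'repl_password'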
I can successfully replicate changes from A to B and from B to A. If B goes
down and comes back up, it re-syncs with no problem. My problem is that when
A goes down, nothing is saved into the queue. When I try to do any updating,
all I get is 'SQL Server does not exist' errors. There are no other triggers
on the table.
Thoughts?
One more thing: this is SQL 2000 SP3 (build 818).
"MJM" wrote:
> Hey all,
> I have a problem with replication that I was hoping someone might be able to
> shed some light on. Heres the scenario.
> Server A, setup as a publisher + distributor. A publication was created and
> marked for transaction replication with immediate updating with queued as a
> failover.
> Server B is the only subscriber. I had to run sp_link_publication in order
> to get it to work.
> I can successfully replication changes from A - B, and B - A. If B goes
> down, and comes back up, it will re-sync no problem. My problem is that when
> A goes down, I am not getting anything saved into the queue. When I try to
> do any updating, all I get is 'SQL Server does not exist, etc'. There are no
> other triggers on the table.
> Thoughts?
|||You have to manually fail over your subscriber from immediate updating to
queued updating. There is no automatic failover.
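If memory serves, the switch is made at the subscriber with sp_setreplfailmode, something like this:
--
-- Run at the Subscriber, in the subscription database, once the
-- Publisher is unreachable; subsequent updates then go into the queue.
EXEC sp_setreplfailmode @failover_mode = 'queued'

-- When the Publisher is back and the queued changes have drained,
-- switch back to immediate updating:
EXEC sp_setreplfailmode @failover_mode = 'immediate'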
Hilary Cotter
Looking for a book on SQL Server replication?
http://www.nwsu.com/0974973602.html
"MJM" <MJM@.discussions.microsoft.com> wrote in message
news:EDD81CC4-9696-4AAB-A468-B8734DC08473@.microsoft.com...[vbcol=seagreen]
> One more thing, this is SQL 2000 SP3 (.818)
> "MJM" wrote:
able to[vbcol=seagreen]
and[vbcol=seagreen]
as a[vbcol=seagreen]
order[vbcol=seagreen]
when[vbcol=seagreen]
to[vbcol=seagreen]
are no[vbcol=seagreen]
Monday, March 19, 2012
I'm Stumped (Complex Joins, WITH, and Updates)
My head is spinning trying to figure this one out. Ok. Here is the schema...
--
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [snmpPoller_HostConfig](
[id] [int] IDENTITY(1,1) NOT NULL,
[friendlyName] [nvarchar](50) NOT NULL,
[hostname] [nvarchar](255) NOT NULL,
[community] [nvarchar](255) NOT NULL,
[version] [int] NOT NULL,
[queryTimeout] [int] NOT NULL,
[isActive] [bit] NOT NULL,
[passedSanityCheck] [bit] NOT NULL CONSTRAINT [DF_snmpPoller_HostConfig_passedSanityCheck] DEFAULT ((0)),
CONSTRAINT [PK_snmpPoller_HostConfig] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO
CREATE TABLE [snmpPoller_TargetConfig](
[id] [int] IDENTITY(1,1) NOT NULL,
[hostId] [int] NOT NULL,
[description] [nvarchar](50) NOT NULL,
[oid] [nvarchar](255) NOT NULL,
[interval] [int] NOT NULL,
[lastPollTimestamp] [datetime] NOT NULL,
[nextPollTime] AS (dateadd(second,[interval],[lastPollTimestamp])),
[isActive] [bit] NOT NULL,
[isPolling] [bit] NOT NULL,
CONSTRAINT [PK_snmpPoller_TargetConfig] PRIMARY KEY CLUSTERED
(
[id] ASC
)WITH (IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
ALTER TABLE [dbo].[snmpPoller_TargetConfig] WITH CHECK ADD CONSTRAINT [FK_snmpPoller_TargetConfig_snmpPoller_HostConfig] FOREIGN KEY([hostId])
REFERENCES [snmpPoller_HostConfig] ([id])
OK, so I have this query:
WITH WorkTable ( Tag, Parent, [Host!1!hostId],
[Target!2!targetId], [Host!1!hostname], [Host!1!community],
[Host!1!version], [Host!1!timeout], [Target!2!oid], [Target!2!interval]
)
AS
(
SELECT DISTINCT
1 AS Tag,
NULL AS Parent,
H.id AS [Host!1!hostId],
NULL AS [Target!2!targetId],
hostname AS [Host!1!hostname],
community AS [Host!1!community],
version AS [Host!1!version],
queryTimeout AS [Host!1!timeout],
NULL AS [Target!2!oid],
NULL AS [Target!2!interval]
FROM snmpPoller_HostConfig H, snmpPoller_TargetConfig T
WHERE H.id = T.hostId
AND t.isPolling = 0
AND t.isActive = 1
AND h.isActive = 1
AND t.lastPollTimestamp < { fn NOW() }
AND h.passedSanityCheck = 1
UNION ALL
SELECT DISTINCT
2 AS Tag,
1 AS Parent,
hostId AS [Host!1!hostId],
T.id AS [Target!2!targetId],
NULL AS [Host!1!hostname],
NULL AS [Host!1!community],
NULL AS [Host!1!version],
NULL AS [Host!1!timeout],
oid AS [Target!2!oid],
interval AS [Target!2!interval]
FROM snmpPoller_HostConfig H, snmpPoller_TargetConfig T
WHERE H.id = T.hostId
AND t.isPolling = 0
AND t.isActive = 1
AND h.isActive = 1
AND t.lastPollTimestamp < { fn NOW() }
AND h.passedSanityCheck = 1
)
SELECT * FROM WorkTable
ORDER BY [Host!1!hostId], [Target!2!targetId]
FOR XML EXPLICIT, ROOT ('Hosts')
That generates this XML:
<Hosts>
<Host hostId="6" hostname="XXX.XXX.XXX.XXX" community="XXXXX" version="2" timeout="30">
<Target targetId="1757" oid=".1.3.6.1.2.1.31.1.1.1.6." interval="30" />
<Target targetId="1758" oid=".1.3.6.1.2.1.31.1.1.1.10." interval="30" />
<Target targetId="1759" oid=".1.3.6.1.2.1.31.1.1.1.7." interval="30" />
<Target targetId="1760" oid=".1.3.6.1.2.1.31.1.1.1.11." interval="30" />
</Host>
<Host hostId="8" hostname=" XXX.XXX.XXX.XXX " community="XXXXXX" version="2" timeout="30">
<Target targetId="2037" oid=".1.3.6.1.2.1.31.1.1.1.6." interval="30" />
<Target targetId="2038" oid=".1.3.6.1.2.1.31.1.1.1.10." interval="30" />
<Target targetId="2039" oid=".1.3.6.1.2.1.31.1.1.1.7." interval="30" />
<Target targetId="2040" oid=".1.3.6.1.2.1.31.1.1.1.11." interval="30" />
</Host>
</Hosts>
Now the problem is that the XML I generate is too 'big'. When I submit an SNMP get request to the host, I can only fit 128 OIDs into the request PDU. My query, as it is, returns many hosts, with upwards of 1,000 targets each. So question number 1 is: how do I return the top 128 targets for each host?
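For question 1, the only lead I have so far is ROW_NUMBER() partitioned by host; an untested sketch against the schema above (I'm on 2005, given the OUTPUT clause in the code below):
--
-- Number each eligible target within its host and keep the first 128.
SELECT hostId, id AS targetId, oid, [interval]
FROM (
    SELECT t.hostId, t.id, t.oid, t.[interval],
           ROW_NUMBER() OVER (PARTITION BY t.hostId ORDER BY t.id) AS rn
    FROM snmpPoller_TargetConfig t
    JOIN snmpPoller_HostConfig h ON h.id = t.hostId
    WHERE t.isPolling = 0 AND t.isActive = 1
      AND h.isActive = 1 AND h.passedSanityCheck = 1
      AND t.lastPollTimestamp < { fn NOW() }
) ranked
WHERE rn <= 128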
Now here is the real problem. I am replacing a working design in an effort to pick up a scale point. The current design has a dispatcher that hands off the hostId to a work queue, where it is picked up by a 'thread'. This thread then executes this SQL:
SqlCommand^ command = connection->CreateCommand();
command->Parameters->AddWithValue("@hostId", hostId);
command->CommandText =
"DECLARE @resultTable TABLE("+
" targetId int NOT NULL,"+
" hostId int NOT NULL,"+
" oid nvarchar(255) NOT NULL"+
"); "+
// Flag up to 128 eligible targets for this host and capture the flagged
// rows atomically via the OUTPUT clause; filtering on hostId limits the
// batch to the dispatched host.
"UPDATE TOP(128) snmpPoller_TargetConfig"+
" SET isPolling = 1"+
" OUTPUT INSERTED.id, INSERTED.hostId, INSERTED.oid INTO @resultTable"+
" WHERE (isPolling = 0) AND (isActive = 1) AND (hostId = @hostId) AND (lastPollTimestamp < { fn NOW() }); "+
"SELECT * FROM @resultTable;";
SqlDataReader^ reader = command->ExecuteReader();
So rather than make (hosts * (number of targets / 128)) calls per dispatch cycle, I need to make it all in one call.
So SQL gurus… what's the kung-fu? How do I get all my hosts with the top 128 targets AND set a flag on all of those targets in one atomic shot? (I believe this 'one-shot' statement can be built off the WITH clause, but I'm nothing but confused at this point.)
Paul
Need SQL Advice? http://sqladvice.com
Need RegEx Advice? http://regexadvice.com
Need XML Advice? http://xmladvice.com
I actually just came up with this. It works just fine, but I'd like to consolidate it down to one statement. Any thoughts?
--
BEGIN TRANSACTION
DECLARE @buffer TABLE
(
Tag int,
Parent int,
[Host!1!hostId] int,
[Target!2!targetId] int,
[Host!1!hostname] nvarchar(255),
[Host!1!community] nvarchar(255),
[Host!1!version] int,
[Host!1!timeout] int,
[Target!2!oid] nvarchar(255),
[Target!2!interval] int
);
WITH WorkTable ( Tag, Parent, [Host!1!hostId],
[Target!2!targetId], [Host!1!hostname], [Host!1!community],
[Host!1!version], [Host!1!timeout], [Target!2!oid], [Target!2!interval]
)
AS
(
SELECT DISTINCT
1 AS Tag,
NULL AS Parent,
H.id AS [Host!1!hostId],
NULL AS [Target!2!targetId],
hostname AS [Host!1!hostname],
community AS [Host!1!community],
version AS [Host!1!version],
queryTimeout AS [Host!1!timeout],
NULL AS [Target!2!oid],
NULL AS [Target!2!interval]
FROM snmpPoller_HostConfig H, snmpPoller_TargetConfig T
WHERE H.id = T.hostId
AND t.isPolling = 0
AND t.isActive = 1
AND h.isActive = 1
AND t.lastPollTimestamp < { fn NOW() }
AND h.passedSanityCheck = 1
UNION ALL
SELECT DISTINCT
2 AS Tag,
1 AS Parent,
hostId AS [Host!1!hostId],
T.id AS [Target!2!targetId],
NULL AS [Host!1!hostname],
NULL AS [Host!1!community],
NULL AS [Host!1!version],
NULL AS [Host!1!timeout],
oid AS [Target!2!oid],
interval AS [Target!2!interval]
FROM snmpPoller_HostConfig H, snmpPoller_TargetConfig T
WHERE H.id = T.hostId
AND t.isPolling = 0
AND t.isActive = 1
AND h.isActive = 1
AND t.lastPollTimestamp < { fn NOW() }
AND h.passedSanityCheck = 1
AND t.id IN(
SELECT TOP(128) id FROM snmpPoller_TargetConfig
WHERE isPolling = 0 AND isActive = 1 AND lastPollTimestamp < { fn NOW() }
AND hostId = h.id
)
)
INSERT INTO @buffer
SELECT * FROM WorkTable
UPDATE t SET isPolling = 1
FROM snmpPoller_TargetConfig t,
@buffer b
WHERE
t.id = b.[Target!2!targetId]
SELECT * FROM @buffer
ORDER BY [Host!1!hostId], [Target!2!targetId]
FOR XML EXPLICIT, ROOT ('Hosts')
COMMIT TRANSACTION
Sorry, I can't post the data; the schema should be sufficient. Just say there are 100 host records and 750 target records per host.
|||Please take a look at the OUTPUT clause. You can perform just the UPDATE statement and capture the relevant columns (even expressions) from the inserted/deleted tables into a table variable, for example, and then run the SELECT against the table variable. Please take a look at my blog post below for one such example:
http://blogs.msdn.com/sqltips/archive/2005/06/13/OUTPUT_clause.aspx
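For instance, a sketch against the schema posted above (untested): update through a CTE that carries a ROW_NUMBER() partitioned by host, and let OUTPUT capture the flagged rows, so the first 128 eligible targets per host are flagged and returned in one atomic statement.
--
DECLARE @resultTable TABLE (
    targetId int NOT NULL,
    hostId int NOT NULL,
    oid nvarchar(255) NOT NULL
);

WITH Eligible AS (
    SELECT t.id, t.hostId, t.oid, t.isPolling,
           ROW_NUMBER() OVER (PARTITION BY t.hostId ORDER BY t.id) AS rn
    FROM snmpPoller_TargetConfig t
    JOIN snmpPoller_HostConfig h ON h.id = t.hostId
    WHERE t.isPolling = 0 AND t.isActive = 1
      AND h.isActive = 1 AND h.passedSanityCheck = 1
      AND t.lastPollTimestamp < { fn NOW() }
)
UPDATE Eligible
SET isPolling = 1
OUTPUT INSERTED.id, INSERTED.hostId, INSERTED.oid
INTO @resultTable (targetId, hostId, oid)
WHERE rn <= 128;

SELECT * FROM @resultTable;
From there you can join @resultTable back to snmpPoller_HostConfig to build the same FOR XML EXPLICIT shape as in the two-statement version.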
Monday, March 12, 2012
I'm flooding my SQL Server with INSERT/UPDATE requests. How do I optimize?
After the first several hundred requests, the SQL Server bogs down (processor at 90-100%) and the application slows while it waits on SQL to update the database.
What would be the best way to optimize this app? Would it help to loop through all my insert/update requests and send them to the server as one big batch of statements (per 1,000 requests or something)? Is there a better way of doing this?
Thanks!
|||Here are two approaches we used:
1. Queue up all inserts/updates and run the batch job as one transaction on one connection. The next release of our Lattice.DataMapper will support running the batch job on a separate thread at a time you define, say midnight tonight.
2. You can send an XML doc with all your insert/update data over to SQL Server and call a stored procedure using SQL Server's XML support, so you are done with a single stored procedure call; a rough sketch follows.
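For instance, the second approach might look something like this with OPENXML (the @doc shape and the Customers table are made-up examples, not from your app):
--
CREATE PROCEDURE BatchUpsertCustomers @doc ntext
AS
BEGIN
    DECLARE @hdoc int
    -- Parse the XML document into an internal representation.
    EXEC sp_xml_preparedocument @hdoc OUTPUT, @doc

    -- Update rows that already exist.
    UPDATE c
    SET c.name = x.name
    FROM Customers c
    JOIN OPENXML(@hdoc, '/rows/row', 1)
         WITH (id int, name nvarchar(100)) x
      ON c.id = x.id

    -- Insert the rest.
    INSERT INTO Customers (id, name)
    SELECT x.id, x.name
    FROM OPENXML(@hdoc, '/rows/row', 1)
         WITH (id int, name nvarchar(100)) x
    WHERE NOT EXISTS (SELECT 1 FROM Customers c WHERE c.id = x.id)

    EXEC sp_xml_removedocument @hdoc
END
Call it once with a document like '<rows><row id="1" name="abc"/><row id="2" name="def"/>...</rows>' instead of thousands of individual statements.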