Wednesday, March 21, 2012
Replication Best Practice Question: Split Publications
The current design is a single publication, including tables, procs and functions (no views yet). All told there are about 686 articles in the publication. The tables are horizontally partitioned using dynamic filters based on the hostname of the subscriber. There are around 50 subscribers. Most will have small databases (< 250 MB). A few (2-3) will have much larger databases. I am mostly concerned that whenever I have to update a proc or function, I have to re-initialize the subscriptions and that pushes a TON of data out over the network and may interrupt service at the subscriber locations.
I see three options for the procs and functions:
1. Include them in the publication with the data tables
2. Place them in a separate (snapshot only) publication
3. Exclude them entirely from replication and maintain them manually
I am starting to lean towards option #2; but I am a bit concerned about maintaining a duplicate set of replication agents for each subscriber.
Any thoughts and/or comments?
Regards,
hmscott
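For what it's worth, option #2 can be sketched roughly like this. All names are hypothetical; the key detail is the 'proc schema only' article type, which ships only the object definition, so re-snapshotting it moves no table data:

```sql
-- Hypothetical sketch: a separate snapshot-only publication for code objects.
-- Publication/article names are invented; adjust to your own database.
EXEC sp_addpublication
    @publication = N'CodeObjects',
    @repl_freq   = N'snapshot',   -- snapshot-only: no log reader involved
    @status      = N'active';

-- Schema-only articles carry just the procedure definition, so
-- re-initializing this publication is cheap compared to the table publication.
EXEC sp_addarticle
    @publication   = N'CodeObjects',
    @article       = N'usp_MyProc',
    @source_object = N'usp_MyProc',
    @type          = N'proc schema only';
```

The trade-off, as noted above, is the extra set of agents to administer.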
Am I overlooking something? Is there an option that I have not considered?|||I maintain them manually. All replication does is replace procs, and I can do that without involving replication.
How many subscribers?|||Ok - I don't really feel qualified on this subject and don't have that much related experience (I use replication to get disparate data sources into one single, read only point rather than the other way round) however you aren't getting much joy so I might as well wade in as not.
I have read a couple of articles that advocate an approach very like point 2. One, as I recall, suggested one publication per article type, which does seem a bit over the top. Anyway - although this would require administration of more agents, it sounds preferable to administering everything manually.
Can I find my sources? Can I buggery - you will just have to take my word for it that I have read at least two articles that suggest this very set up. I will continue hunting through my favourites.|||Pootle -
Thanks for your comments. I'm going to experiment a bit with this approach in the next few days.
Is it me, or does it seem that there's a dearth of information on the net regarding SQL replication? I have found a few sites (http://www.replicationanswers.com/Default.asp, http://www.dyessconsulting.com, and a few articles on http://www.sqlservercentral.com), but it just seems that there's not a whole lot of "hard" information (i.e., mostly it's a lot of generic stuff with few practical examples).
Thanks again, I'll try to remember to post back with an update as things progress.
Regards,
hmscott|||That is my experience too. I believe Hilary Cotter is extremely knowledgeable and also active on some forum(s) if you really get no joy here.
Tuesday, March 20, 2012
Replication and power outages/offline servers
Currently, the servers are in the same building, which is prone to power outages. When a power outage occurs, how are the publications going to act once power is restored? Will we have to reinitialize the publications? This would be bad, considering one of our publications takes around four hours to deliver a complete snapshot. What happens if one of the clusters were to go offline? When it comes back, would it continue to replicate, or would the publication need to be reinitialized? Curious if anyone has any experience with this.
I've tried searching for any articles on this topic and have not been successful. If anyone knows of any, point me in that direction. Thanks!
-mike
Replication should pick up where it left off in the case of a power or network outage; re-inits should not happen due to those issues.
Replication and Log Size problems
We typically have the log size at 1 GB. The DB is replicating two publications to two different servers.
It appears that re-indexing is causing our log file to grow too large, and then the replication log reader cannot read it.
In order to get the log down to normal size again, I have to do the following. Kill one of the publications (because two log readers are reading the log), then:
1) EXEC sp_repldone
2) BACKUP LOG Prod_iVend WITH TRUNCATE_ONLY
EXEC sp_repldone
Jay,
To remove the inactive part of the log you can back up the log, assuming the log reader has read the records. So, to reduce the log size, I'd increase the frequency of your log backups. You shouldn't have to remove the publication to be able to reduce the log file size. In what way is the log reader failing? If it is a timeout, you can increase the -QueryTimeOut value. You might also consider fixing a value for -ReadBatchThreshold.
HTH,
Paul Ibison
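If it helps, the two switches Paul mentions live on the log reader agent's job step command. A hedged sketch of setting them (job name, server names, and values are invented; check the actual job under the Management folder first):

```sql
-- Hypothetical sketch: append -QueryTimeOut / -ReadBatchThreshold to the
-- log reader agent's "Run agent." job step. Names and values are examples.
EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'MYSERVER-Prod_iVend-1',  -- your log reader agent job
    @step_id  = 2,                         -- typically the "Run agent." step
    @command  = N'-Publisher [MYSERVER] -PublisherDB [Prod_iVend] '
              + N'-Distributor [MYSERVER] -DistributorSecurityMode 1 '
              + N'-QueryTimeOut 1800 -ReadBatchThreshold 100';
```

Restart the agent afterwards so the new parameters take effect.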
|||It appears that re-indexing is causing the log to grow so much. We have broken up reindexing so that only a few large tables are done each night to minimize the impact.
What happens is that the log grows to a size that the log reader cannot suck it in, and then we get these errors:
The process could not execute 'sp_repldone/sp_replcounters' on '(server name)'.
Another log reader is replicating the database.
(Source: (server name) (Data source); Error number: 18752)
The process could not set the last distributed transaction.
(Source: (server name) -(job name) (Agent); Error number: 22017)
Because there are two publications coming from the DB, it can only be fixed by removing one of them completely. I cannot shrink the log until I do that.
Any ideas?
|||Assuming nobody is running sp_replcmds in QA, then the other log reader being referred to is the other transactional publication. Try stopping one log reader and setting the other one off, so as to stagger the processes. Once one has finished reading, run the other, then back up and shrink.
Also change the 2 parameters mentioned earlier.
HTH,
Paul Ibison
|||Whenever you run sp_replcmds yourself you MUST run sp_replflush before running it, or you will get errors like this.
The only clean way of fixing the multiple log reader agent problem is to stop the SQL Server agent, check task manager to see all the logread.exe's go away. Then restart the SQL Server agent.
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
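A sketch of the sequence Hilary describes, in case it helps (run in the published database):

```sql
-- Manually peeking at pending commands: flush first, read, then flush again
-- so the real log reader agent can reconnect afterwards.
EXEC sp_replflush;               -- release the article cache / current reader
EXEC sp_replcmds @maxtrans = 1;  -- read the next pending transaction's commands
EXEC sp_replflush;               -- disconnect, or the agent hits error 18752
```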
|||With what I am seeing, that is not possible. If I stop the log reader agent for one, it stops the log reader for the other one automatically. I think they are the same log reader, even though they are two different publications.
|||I think the real problem is that the log file is just too big. I have seen it expand to 14 GB, and then the log reader agent just chews on it forever. It is a 4 GB RAM server with 3 GB allocated for SQL Server.
Is there any KB article on this being an issue? I cannot find anything.
|||This doesn't sound correct. The log reader agent is intelligent enough to know the LSN (log sequence number) it last read and marked as replicated, and then it starts reading from there.
Issue sp_repltrans to get an idea of how many transactions remain unread in the log.
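A quick diagnostic sketch along those lines (database name taken from the earlier posts; the sys.databases query assumes SQL Server 2005 or later):

```sql
USE Prod_iVend;
EXEC sp_repltrans;       -- one row per transaction still awaiting the log reader
EXEC sp_replcounters;    -- log reader latency/throughput per published database
DBCC SQLPERF(LOGSPACE);  -- how full each database's log actually is

-- On SQL Server 2005+ this shows why the log cannot truncate
-- (REPLICATION here means the log reader has not caught up):
SELECT log_reuse_wait_desc
FROM sys.databases
WHERE name = N'Prod_iVend';
```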
|||There seems to be another bug here. When you create two publications in the same database there are actually two log reader agents created. Only one log reader agent runs, though - that is the later one.
However, check in the management folder and you will see two log reader agent jobs for this publication database.
The bug is that if you run sp_helppublication for your publications, each one will report the name of the log reader agent which it created.
The problem is that sp_helppublication should be reporting the log reader agent actually in use.
It is possible that you have started two instances of the same log reader, or that you have started two log readers for two publications in the same publication database. This is only possible if you try to start them from the management folder or start the jobs manually.
Again, bounce SQL Server agent to correct the problem.
Hilary Cotter
Friday, March 9, 2012
Replication - Snapshots
I am looking at setting up a solution that relies on replication of data to many publications. I believe what I need is merge replication. In this scenario, many companies will receive data from the master publication, but each company should ONLY receive information for their concern. How do I configure the replication for these 250 companies so that each receives only the data it needs?
Thanks in advance
What you are describing is a filtered publication.
You have two options: create a publication which filters rows based on some hardcoded value, e.g. SalesID = 123213, or use a dynamic filter, where the publication detects some property of the subscriber and only sends rows to that subscriber, e.g. hostname() or @@servername. Dynamic filters work best with pull subscriptions.
The technology is called horizontal partitioning. As you are creating your publication you will get to a dialog called Customize the Properties of the Publication. Select Yes, and then select "Horizontally, by filtering the rows of published data". Enter a filter clause that will filter the rows so that only the rows you want to end up on the subscriber land there.
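For merge replication the same idea can be sketched like this (table and column names are invented; only the filter-clause mechanism is the point):

```sql
-- Hypothetical dynamic filter: each subscriber receives only rows whose
-- CompanyCode matches the value the merge agent reports as its host name.
EXEC sp_addmergearticle
    @publication         = N'CompanyData',
    @article             = N'Orders',
    @source_object       = N'Orders',
    @subset_filterclause = N'CompanyCode = HOST_NAME()';
```

Each subscriber's merge agent then either relies on its machine name or is started with an explicit -Hostname value, so 250 companies can share one publication while pulling different rows.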
It also sounds like what you need is transactional replication.
Monday, February 20, 2012
Replicating Stored Procedure Alterations
I've created two publications. One publication contains the tables; the second publication contains the stored procedures that work with the tables of the first publication. In another database I subscribe to the two publications - the table publication first, followed by the stored procedure publication. I'd now like to make an alteration to a stored procedure on the publisher using ALTER PROC and have that schema change replicated over to the subscriber. I perform the ALTER PROC on the publisher, but the change does not appear on the subscriber. How can I get the ALTER PROC to take effect on the subscriber?
Thanks
Jerry
have a look at sp_addscriptexec for this
Looking for a SQL Server replication book?
http://www.nwsu.com/0974973602.html
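A hedged sketch of what that looks like (publication name and script path are invented; the path must be a share the agents can reach):

```sql
-- Post an ALTER PROC script to every subscriber of the publication.
EXEC sp_addscriptexec
    @publication = N'TablePub',
    @scriptfile  = N'\\MyServer\repl_scripts\alter_usp_MyProc.sql',
    @skiperror   = 0;  -- 0 = stop the distribution agent on any script error
```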
|||Hilary,
Thanks. There is an item that discusses this to some degree at the SQL Mag site (instant doc #26344), but I'm out of town and do not know my login info. It looks like it recommends reinitialization, which sounds OK for the proc subscription, but a default of a drop for the proc would delete the permissions as well... agree? That doesn't sound like the best solution to me. I had thought of using the on-demand script execution as a workaround. Just wanted to know if I was missing something. Guess not?
Also, it looks like you have some useful information in your books on replication. What are the costs and how are they obtainable? By the way, I'm a SQL instructor (in a replication class this week). Are you willing to provide me with a copy of both replication books no-fee? If so, I'd be willing to share them as a potential purchasable resource for my students in my replication classes.
Let me know.
Thanks again.
Jerry
|||You don't need to reinitialize when running sp_addscriptexec; it will merely distribute and execute a script on all of your subscribers. You can reinitialize if you want, but you don't have to.
The book will be available on Amazon eventually. It will also be available on nwsu.com at some point as well. I'll contact you offline about your other request. It has to be printed first.
|||Right...wasn't implying that I needed to reinit when using the sp proc. Was comparing two different approaches to resolve the issue. I think the sp proc is the easiest to implement and work with.
Jerry