
So I have what seems like a common question, but I can't find an answer to it: what is the "best practice" for architecting a database that maintains data locally, then syncs that data to a remote database that is shared between many clients?

For example, suppose I had a desktop application that stored to-do lists (in SQL), each with individual items, and I wanted to send that data to a web service holding a "master" copy of all the different clients' information. I'm not as worried about sync conflicts as I am about thinking through the actual architecture of the client's tables and the web service's tables.

Here's an example of how I was thinking about it:

Client Database

list
--list_client_id (primary key, auto-increment)
--list_name

list_item
--list_item_client_id (primary key, auto-increment)
--list_id
--list_item_text

Web Based Master Database (Shared between many clients)

list
--list_master_id (primary key, auto-increment)
--list_client_id
--list_name
--user_id


list_item
--list_item_master_id (primary key, auto-increment)
--list_item_client_id
--list_id
--list_item_text
--user_id

The idea is that the client can create to-do lists with items and sync them with the web service at any given time (i.e. if they lose data connectivity and aren't able to send the information until later, nothing gets out of order). The web service would record the client's IDs as just extra fields. That way, the client can say "update list number 4 with a new name" and the server takes this to mean "update user 12's list number 4 with a new name".
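A minimal sketch of the server-side half of this idea, using SQLite; the table and the `sync_list` function name are hypothetical, and the upsert syntax assumes SQLite 3.24+. The key point is the `UNIQUE (user_id, list_client_id)` constraint, which is what lets the server translate "list number 4" into one specific master row per user:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE list (
        list_master_id INTEGER PRIMARY KEY AUTOINCREMENT,
        list_client_id INTEGER NOT NULL,
        list_name      TEXT,
        user_id        INTEGER NOT NULL,
        UNIQUE (user_id, list_client_id)  -- one master row per (user, client id)
    )
""")

def sync_list(user_id: int, list_client_id: int, list_name: str) -> None:
    # Insert on first sync; update in place when the same client id arrives
    # again, so "user 12's list 4" always maps to the same master row.
    conn.execute(
        """INSERT INTO list (list_client_id, list_name, user_id)
           VALUES (?, ?, ?)
           ON CONFLICT (user_id, list_client_id)
           DO UPDATE SET list_name = excluded.list_name""",
        (list_client_id, list_name, user_id),
    )

sync_list(12, 4, "Groceries")
sync_list(12, 4, "Groceries (renamed)")  # same client list -> renamed, not duplicated
```

Because the client's own ID only has to be unique *per user*, the client can keep using its simple auto-increment keys offline.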

JoeCortopassi
    Is this a one-way update from client to web service or would there need to be changes pushed to the client? – JeffO Jan 31 '12 at 18:34
  • Yeah, it would need to be a two way street. The client would be acting like a cache for the web server – JoeCortopassi Jan 31 '12 at 18:58
  • It sounds like you're talking about replication. I know it's used at my place of work to replicate certain tables/rows down to many remote (smaller) databases – Antony Scott Feb 09 '12 at 19:57

2 Answers


We had a similar case. The solution was somewhat unusual: every database, whether the main server or a local client, had a copy of the same database structure or schema (the same tables with the same fields and the same field types).

We added a client catalog table:

create table client
(
   client_key int primary key,
   client_title varchar(200)
);

The primary key of each table was an integer, but it was managed as if it were a fixed-length string/char, with integer values up to 9999 9999 9999 9999.

That length was big enough to support a very large quantity of records.

The numbers were stored in reverse, in order to support padding with zeroes. To store a "4", it would look like this: "4000 0000 0000 0000".

The leading digits store the client's key, padded with zeroes. The remaining digits store the incremental value of the record's key, also padded with zeroes.

So, the following data:

Client Sequential Number <other fields>
1      1                 ...
2      1                 ...
3      1                 ...
1      2                 ...
2      2                 ...
3      2                 ...
1      3                 ...
2      3                 ...
3      3                 ...

was stored in the primary key of each table like this:

Table_Key        <other fields>
1000000000001000 ...
1000000000002000 ...
1000000000003000 ...
2000000000001000 ...
2000000000002000 ...
2000000000003000 ...
3000000000001000 ...
3000000000002000 ...
3000000000003000 ...

In order to avoid primary key collisions.
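The composition above can be sketched as a pair of helper functions. This is a guess at the scheme, not the answerer's actual code: the 12/4 split between client digits and sequence digits is inferred from the worked example, and the function names are made up.

```python
def make_key(client_key: int, seq: int) -> str:
    """Compose a 16-digit primary key: each number's digits are reversed
    and right-padded with zeroes; client part first, sequence part last."""
    return str(client_key)[::-1].ljust(12, "0") + str(seq)[::-1].ljust(4, "0")

def split_key(key: str) -> tuple[int, int]:
    """Recover (client_key, seq) from a composed key."""
    return int(key[:12][::-1]), int(key[12:][::-1])
```

For example, `make_key(1, 1)` gives `"1000000000001000"` and `make_key(3, 2)` gives `"3000000000002000"`, matching the table above; since each client's digits occupy a fixed region of the key, two clients can never generate the same value.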

umlcat
  • You can use other methods to avoid collisions: one is assigning only ODD ids on the client side and EVEN ids on the server's side; another option would be to use a "string id" with a different prefix, i.e. cli- vs. serv- – Nir Alfasi Feb 12 '12 at 21:00

I understand you want a local store (e.g. a local copy of the to-do list), then to upload the data, syncing it back to the master database. Writing the services isn't the hard part; managing the syncing between databases is. I've recently been looking into nServiceBus (http://nservicebus.com/) to improve the management of data syncing between systems. It's an area I'm interested in, but I have yet to find the best approach or a magic bullet. I'd recommend taking a look at nServiceBus.

Nickz