I wasn't sure how to phrase this question, but it's basically a textbook scenario. I'm working on an article-based site where the article content is stored in a database. Each page is then rendered from that data based on the requested article ID:
For example: http://www.mysite.com/articles/9851
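To make that concrete, here's a minimal sketch of roughly what my setup looks like. The framework (Flask), the lookup function, and the in-memory "database" here are all just placeholders standing in for my actual code:

```python
from flask import Flask, render_template, abort

app = Flask(__name__)

# Hypothetical stand-in for the real database
ARTICLES = {9851: {"title": "Example article", "body": "..."}}

def get_article_from_db(article_id):
    # Placeholder for the actual database query
    return ARTICLES.get(article_id)

@app.route("/articles/<int:article_id>")
def show_article(article_id):
    # The page doesn't exist on disk; it's rendered from a template
    # using whatever row matches the requested ID.
    article = get_article_from_db(article_id)
    if article is None:
        abort(404)
    return render_template("article.html", article=article)
```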
I'm new to SEO, so I'm wondering how search engines are able to crawl the contents of pages like this, and what I need to do to make sure they get crawled.
Take this site, for instance. All of the articles/posts here appear to live in a database somewhere. The URL contains an ID that tells the server which data to use to generate the page, so the page doesn't actually exist as a file; only its template does. Yet when I search Google, I can find one of these posts based on the content of the post.
I understand that crawlers normally find a page, follow its links, then follow those pages' links, and so on. But how does that work when the site is database-driven like this? Do I have to create a page that picks articles out of the database and links to them, so that the crawler can see them (something like the sketch below)?
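Here's the kind of thing I have in mind, again just a hypothetical Flask sketch reusing the placeholder "database" from above: a bare index page that links to every article, purely so a crawler landing there has links it can follow to each /articles/<id> page. I'm not sure whether it should list everything, a random sample, or whether this is even the right approach at all.

```python
from flask import Flask

app = Flask(__name__)

# Same hypothetical stand-in for the database as above
ARTICLES = {9851: {"title": "Example article"}, 9852: {"title": "Another one"}}

@app.route("/articles/")
def article_index():
    # A plain list of links to every article in the database, so a
    # crawler has a path to each dynamically generated article page.
    links = "".join(
        f'<li><a href="/articles/{aid}">{art["title"]}</a></li>'
        for aid, art in ARTICLES.items()
    )
    return f"<ul>{links}</ul>"
```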