I was curious as to a means that people like myself could come to an understanding on roughly how this program works... I understand what the goal of it is, but how do you intend to do this?...
- TopRaman
While I have explained a lot of SixDegreeSteam's abstract mechanics in previous articles, I feel they were not programmatically detailed enough to give a good picture of how exactly the program operates. So, I'll try to fill that gap here. There are three components to the project; I'll discuss each of them as simply and cleanly as possible.
WARNING: Previous articles have focused on previous versions of the project. Likewise, this article will focus on the current version of the project as of this writing (1.5C). I feel this current version is the most efficient, so it is likely to stick around for a long time. But, this is a disclaimer just in case the project does change again substantially.
The first and (thus far) most time-consuming component is the crawler. Without it, the project would have no dataset. The crawler has the straightforward job of collecting links to other profiles from one profile, saving (queueing) those links, then opening them back up sequentially to collect more links. The process continues until all links are collected and thereby all profiles are analyzed. This process of collecting, storing, reading, and repeating is called crawling. The specific crawling logic will not be described here for the sake of brevity, but all the details can be found in an earlier post titled SixDegreeSteam: Challenges. Program-wise, though, the crawler downloads the pages containing the links, extracts the links using a combination of regular expressions and XML parsing, and inserts them into a SQL database table called the crawler queue. The crawler also extracts some basic profile information, such as SteamID, profile name, and avatar, in a similar manner and inserts it all into another table called the user dataset (or group dataset). The complete list of fields stored per user is SteamID, last crawl time (to prevent recrawling a user too frequently), profile name, avatar, friends, and group memberships. The list stored per group is similar: SteamID, last crawl time, group name, avatar, and members.
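To make that cycle concrete, here is a minimal sketch of it in Python. It is purely illustrative: the table layouts, column names, URL, and regular expression are my own assumptions rather than the project's actual schema or parsing rules, and it uses regex extraction only (the XML-parsing side, error handling, and rate limiting are all omitted).

```python
import re
import sqlite3
import time
import urllib.request

# Assumed schema for illustration; the real crawler queue and
# user dataset tables may be laid out differently.
db = sqlite3.connect("sixdegreesteam.db")
db.executescript("""
CREATE TABLE IF NOT EXISTS crawler_queue (steamid TEXT PRIMARY KEY);
CREATE TABLE IF NOT EXISTS users (
    steamid    TEXT PRIMARY KEY,
    last_crawl INTEGER,          -- prevents recrawling too frequently
    name       TEXT,
    avatar     TEXT
);
CREATE TABLE IF NOT EXISTS friends (steamid TEXT, friend TEXT);
""")

# Hypothetical pattern for profile links found on a friends page.
PROFILE_RE = re.compile(r"steamcommunity\.com/profiles/(\d+)")

def crawl_one(steamid):
    """Download one profile's friends page, store the friend edges,
    and queue any profiles we have not seen yet."""
    url = f"https://steamcommunity.com/profiles/{steamid}/friends"
    html = urllib.request.urlopen(url).read().decode("utf-8", "replace")
    db.execute("INSERT OR REPLACE INTO users (steamid, last_crawl) VALUES (?, ?)",
               (steamid, int(time.time())))
    for friend in set(PROFILE_RE.findall(html)):
        db.execute("INSERT INTO friends VALUES (?, ?)", (steamid, friend))
        db.execute("INSERT OR IGNORE INTO crawler_queue VALUES (?)", (friend,))
    db.execute("DELETE FROM crawler_queue WHERE steamid = ?", (steamid,))
    db.commit()

# Seed the queue with one starting profile (placeholder ID), then let
# the loop collect, store, read, and repeat until nothing is queued.
db.execute("INSERT OR IGNORE INTO crawler_queue VALUES (?)", ("STARTING_STEAMID",))
while True:
    row = db.execute("SELECT steamid FROM crawler_queue LIMIT 1").fetchone()
    if row is None:
        break
    crawl_one(row[0])
```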
Given that the crawler does its job correctly, we are left with a database filled (and I mean FILLED) with information about the users and groups of the Steam Community. Oddly enough, that information is pretty inconsequential without a way to harness the datasets. So, our second component, aptly named Pathfinder, is arguably as important as the crawler. Pathfinder has the sole purpose of using the information available through the database to calculate a lowest-cost path between two given nodes. To do this, I've opted for an object-oriented rendition of the breadth-first search graph theory algorithm. Once again, for the sake of brevity, I won't go into detail about the algorithm. If you are curious about it, follow the link or run a Google search; there are tons of articles that cover it far better than I can. After the algorithm is applied, the list of users and groups that reaches profile B from profile A the quickest is displayed, with links to each profile and avatars for recognition purposes.
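Since Pathfinder's core is breadth-first search over the stored friend lists, a compact sketch may help. It assumes the hypothetical friends table from the crawler sketch above; the function name and query are mine, and the real Pathfinder also walks group memberships, which this sketch leaves out. Because every friendship edge has the same cost, the first path BFS finds is also a lowest-cost one.

```python
from collections import deque

def shortest_path(db, start, goal):
    """Breadth-first search from one SteamID to another over the
    friends table, returning the chain of SteamIDs between them."""
    parents = {start: None}      # remembers how each profile was reached
    frontier = deque([start])
    while frontier:
        current = frontier.popleft()
        if current == goal:
            path = []            # walk the parent links back to the start
            while current is not None:
                path.append(current)
                current = parents[current]
            return path[::-1]
        rows = db.execute("SELECT friend FROM friends WHERE steamid = ?",
                          (current,))
        for (neighbor,) in rows:
            if neighbor not in parents:    # visit each profile only once
                parents[neighbor] = current
                frontier.append(neighbor)
    return None  # the two profiles are not connected in the dataset
```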
The third and newest addition to the component set is a sort of network browser. Using a graphical, navigable web rendering of the dataset, users will be able to quickly traverse the entire Steam Community social network, giving them a better idea of just where they fit into it all. This component is simply an auxiliary to the project and is not a real focus. As such, it will be the last to be implemented and will only enter the development picture after the crawler and Pathfinder are thoroughly complete.
As was said about Pathfinder, the information gathered by the crawler is pretty useless to the end user until there is a way to put it to work. That explains why the crawler has been operational for nearly a month now, yet the site is still empty.