The disksummary branch has an entirely new implementation of IMAP, which was presumptuously named IMAPX. For eXtra something?
Anyway, it is complex enough to deserve some explanation.
Most of the work of communicating with the server goes through the CamelIMAPXServer object. This has a thread-safe API on one side and, potentially, a thread running the tasks on the other.
It does essentially all of the heavy lifting: it processes LIST and FETCH commands, manages and updates folders and the cache, and handles all of the communication with the server.
A rough diagram below shows the general architecture of the IMAPX code.
The simple folder- and store-friendly interfaces exposed by CamelIMAPXServer hide a horrendous amount of detail, although most parts are reasonably well defined within themselves.
The breakdown goes something like this:
- the external API creates
- job requests to perform a task, which create
- IMAP commands that fetch server information, which
- are processed by a command queue
Jobs may trigger other jobs, or operate in multiple passes. Multi-part jobs may be scheduled in parallel or sequenced by the command queue.
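The chain above can be sketched, very loosely, in C. Everything here - the struct names, the fixed-size queue, the two commands a "refresh" job expands into - is illustrative only, not the real CamelIMAPXServer API:

```c
#include <assert.h>
#include <string.h>

#define MAX_CMDS 8

/* One queued IMAP command: a tag plus the text to send. */
typedef struct {
    const char *tag;     /* IMAP command tag, e.g. "A0001" */
    const char *text;    /* command text sent to the server */
} Command;

/* A trivial FIFO standing in for the real command queue. */
typedef struct {
    Command cmds[MAX_CMDS];
    int head, tail;
} CommandQueue;

static void queue_push(CommandQueue *q, const char *tag, const char *text)
{
    q->cmds[q->tail].tag = tag;
    q->cmds[q->tail].text = text;
    q->tail++;
}

static const Command *queue_pop(CommandQueue *q)
{
    return (q->head < q->tail) ? &q->cmds[q->head++] : NULL;
}

/* A hypothetical "refresh folder" job expanding into two commands,
 * which the queue then feeds to the server in order. */
static void refresh_job_expand(CommandQueue *q, const char *folder)
{
    (void) folder; /* a real job would use the folder name */
    queue_push(q, "A0001", "SELECT INBOX");
    queue_push(q, "A0002", "FETCH 1:* (UID FLAGS)");
}
```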
There are helpers for easily forming IMAP commands from multiple parts - literals, or streams of data. The command processor automatically determines how and when to send commands to the server, whether they can be pipelined or must wait for literal continuations, calculates data sizes, and so forth.
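The literal handling itself is plain RFC 3501 framing: a part carrying N octets is announced as `{N}` followed by CRLF, and the sender must wait for the server's continuation response before transmitting the bytes. A minimal sketch of the size announcement (the helper name is made up):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format the header that announces a literal of nbytes octets,
 * per RFC 3501: "{N}\r\n".  After sending this, the client waits
 * for a "+ " continuation before sending the raw data. */
static int format_literal_header(char *out, size_t outlen, size_t nbytes)
{
    return snprintf(out, outlen, "{%zu}\r\n", nbytes);
}
```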
There are also many tunables - how data is fetched, how many concurrent FETCH calls are invoked, and so forth - which can affect overall performance significantly.
The IMAPX store passes most operations through to the server as well. It uses a view summary to manage the folder lists.
Multiple namespaces are NOT implemented, although adding them shouldn't be much work.
The CamelIMAPXFolder implements almost everything through calls to the IMAPX Server interfaces, or the folder summary interfaces.
Mail is fetched using partial fetches, based on byte offsets rather than on message structure. Most servers that support partial fetching implement this correctly, I hope! All messages are downloaded to a local cache before being used, and they are also automatically cached whenever a folder is selected.
Partial message fetches allow the code to
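The byte-offset form of a partial fetch is the standard BODY.PEEK[]<origin.size> syntax from RFC 3501. A sketch of how one chunk request might be formatted - the helper name and the chunking policy are hypothetical, and a real download loop would advance the offset until the server returned fewer bytes than requested:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Format a single partial-fetch command for one chunk of a message,
 * using RFC 3501's BODY.PEEK[]<origin.size> form.  PEEK avoids
 * implicitly setting the \Seen flag on the server. */
static int format_partial_fetch(char *out, size_t outlen,
                                unsigned uid, size_t offset, size_t chunk)
{
    return snprintf(out, outlen,
                    "UID FETCH %u (BODY.PEEK[]<%zu.%zu>)",
                    uid, offset, chunk);
}
```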
The folder summary object, CamelIMAPXSummary, subclasses CamelDS.FolderSummaryDisk and adds a single extra field to each record: the flags the message has on the server. This is used to calculate how to store changes to flags.
The code relies heavily on the sorted nature of the folder summary records to implement simple update algorithms.
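For example, because both the stored summary and the server's UID list arrive sorted, additions and removals fall out of a single linear merge pass with no searching. A simplified, array-based sketch of that kind of update (not the real summary code):

```c
#include <assert.h>

/* Walk a sorted local UID list and a sorted server UID list together,
 * emitting UIDs that vanished from the server and UIDs that are new,
 * in one O(n) pass.  Output arrays must be large enough. */
static void diff_sorted_uids(const unsigned *local, int nlocal,
                             const unsigned *server, int nserver,
                             unsigned *removed, int *nremoved,
                             unsigned *added, int *nadded)
{
    int i = 0, j = 0;
    *nremoved = *nadded = 0;
    while (i < nlocal || j < nserver) {
        if (j >= nserver || (i < nlocal && local[i] < server[j]))
            removed[(*nremoved)++] = local[i++];   /* gone from server */
        else if (i >= nlocal || server[j] < local[i])
            added[(*nadded)++] = server[j++];      /* new on server */
        else { i++; j++; }                         /* present in both */
    }
}
```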
Also note that it catches the folder summary's changed callback: when flag changes are synced, they are also stored on the server. So folder flag changes are pushed soon after they are made, not just when the folder is synced. This means less work needs to be done at sync time, and failures are less likely to lose data. As an added bonus, the flag update algorithm is better: only changes to flags are stored; the flag set is never written directly. Together with a decent server, both of these should allow much smoother interoperability with concurrently running clients.
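A change-only update like this maps naturally onto IMAP's STORE +FLAGS / -FLAGS forms: diff the current local flags against the last flags seen on the server and send only the two deltas. A minimal sketch, with flags as plain bitmasks (the struct is illustrative):

```c
#include <assert.h>

/* The two deltas a flag sync needs: bits to add on the server
 * (STORE +FLAGS.SILENT) and bits to remove (STORE -FLAGS.SILENT).
 * Because the full flag set is never written wholesale, flags set
 * concurrently by another client are not clobbered. */
typedef struct {
    unsigned to_add;     /* set locally, not yet on server */
    unsigned to_remove;  /* cleared locally, still on server */
} FlagDelta;

static FlagDelta flag_delta(unsigned server_flags, unsigned local_flags)
{
    FlagDelta d;
    d.to_add = local_flags & ~server_flags;
    d.to_remove = server_flags & ~local_flags;
    return d;
}
```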
IMAPX Stream

The IMAPX stream both wraps a remote connection and provides a lexical analyser for processing the IMAP tokens at a relatively high level.
All tokenising APIs are based on a non-copying approach. Instead of returning a wasteful g_strdup'd string which must be freed, pointers into internal buffers are returned directly. This greatly reduces memory overhead and simplifies memory management.
Even literal tokens, which may be larger than the internal buffers, are returned as segments parsed from the internal buffer. The normal stream 'read' call can also be used to read literal content: if a literal is marked in the stream, reads return EOF once the literal is complete, so a literal can be consumed simply by copying the stream out to a storage stream.
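The non-copying idea can be sketched as a token that is just a pointer and a length into the stream's own buffer, valid only until the next refill. This is far simpler than the real lexer, and all the names here are invented:

```c
#include <assert.h>
#include <string.h>

/* A borrowed token: points into the stream's buffer, owns nothing.
 * The caller must use it before the buffer is refilled. */
typedef struct {
    const char *data;
    size_t len;
} Token;

/* Scan the next atom (a run of non-space, non-paren characters)
 * from buf starting at *pos.  Returns 0 when no atom remains. */
static int next_atom(const char *buf, size_t buflen, size_t *pos, Token *tok)
{
    while (*pos < buflen && buf[*pos] == ' ')
        (*pos)++;
    if (*pos >= buflen)
        return 0;
    tok->data = buf + *pos;
    tok->len = 0;
    while (*pos < buflen && buf[*pos] != ' ' &&
           buf[*pos] != '(' && buf[*pos] != ')') {
        (*pos)++;
        tok->len++;
    }
    return tok->len > 0;
}
```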
IMAPX Utils

The utilities provide all manner of interfaces for parsing raw IMAP output into usable, structured, native Camel formats.
ENVELOPE requests are translated into partial CamelMessageInfos, content fetches are converted into CamelStreams, FLAG data is translated into flag bits or CamelFlag lists.
This is the parser layer, which sits on the lexical-analyser layer of the stream.
IMAPX View Summary
The view summary extends CamelDS.ViewSummary#Camel.ViewSummaryDisk, and adds some IMAP-folder-specific information to the root views of all folders: the UIDVALIDITY, EXISTS and PERMANENTFLAGS status information.
Another important task it performs is storing the raw folder name, so that folders with incorrectly formed names can still be properly resolved.
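UIDVALIDITY is what makes the cached records trustworthy in the first place: per RFC 3501, cached UIDs may only be reused while the stored UIDVALIDITY matches the value the server reports at SELECT time; on a mismatch the folder's local cache must be discarded and rebuilt. A trivial sketch of that check (names hypothetical):

```c
#include <assert.h>

/* What to do with a folder's local cache after SELECT. */
enum cache_action { CACHE_KEEP, CACHE_REBUILD };

/* RFC 3501: stored UIDs stay valid only while the stored
 * UIDVALIDITY equals the value the server just reported. */
static enum cache_action check_uidvalidity(unsigned stored, unsigned reported)
{
    return stored == reported ? CACHE_KEEP : CACHE_REBUILD;
}
```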
Many of the internal functions return exceptions directly rather than taking an exception argument.
This was tried mainly as an experiment, but in some cases it leads to nicely readable code.
The following diagram shows how this all fits together.
The boundaries are fairly strictly enforced in the downward direction; i.e. the stream layer doesn't know about the utility layer, and so forth.
There are a bunch of things left to finish, although the basic code works enough to read and manage mail.
There are a lot of problems with folder_changed events; many of them do not propagate properly, although this needs to be verified. There may now also be some duplication, because the FolderSummary emits more events than it did when this code was written.
Appending might need some work. At present there is no distinction between offline-mode and online-mode appending; both go through the same mechanism. New messages are queued using a guessed UID, which is then fixed up when the message reaches the server, or when the server tells us about it (if it doesn't support UIDPLUS).
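The guessed-UID fixup might look roughly like this: the appended message is recorded under a provisional UID just past the highest one known, then renumbered once the real UID arrives - from UIDPLUS's APPENDUID response code (RFC 4315), or by spotting the message in a later fetch. The record layout is hypothetical:

```c
#include <assert.h>

/* A locally appended message awaiting its real server UID. */
typedef struct {
    unsigned uid;        /* provisional until confirmed */
    int confirmed;       /* nonzero once the server's UID is known */
} AppendRecord;

/* Queue an append under a guessed UID just past the highest known one. */
static AppendRecord queue_append(unsigned last_known_uid)
{
    AppendRecord r = { last_known_uid + 1, 0 };
    return r;
}

/* Fix up the record when the real UID arrives (e.g. via APPENDUID). */
static void confirm_append(AppendRecord *r, unsigned real_uid)
{
    r->uid = real_uid;
    r->confirmed = 1;
}
```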
Offline/online mode isn't implemented in any way, although it should be relatively transparent anyway: ideally the connection should be able to be dropped at any time without requiring any special synchronisation.
This is a wonderfully cool bit of complex code, even if I say so myself.
Much of the parsing code was developed for the unfinished 'IMAPP' implementation, but that had an overly complicated driver which tried to operate without a worker thread. The IMAPX code cleaned this up significantly, reducing the folder code to almost nothing.
Even having said that, there are a lot of problems left to fix ...