RaptorDB - The Key Value Store V2

By Mehdi Gholam | 8 Mar 2012
Database » Other databases | Licence: CPOL | First Posted: 19 Jan 2012 | Views: 24,219 | Downloads: 992 | Rating: 4.95/5 (56 votes)

Even faster key/value store NoSQL embedded database engine, utilizing the new MGIndex data structure with MurmurHash2 hashing and WAH bitmap indexes for duplicates.

Contents

- Introduction
- What is RaptorDB?
- Features
- Why another data structure?
  - The problem with a b+tree
  - Requirements of a good index structure
- The MGIndex
  - Page Splits
  - Interesting side effects of MGIndex
  - The road not taken / the road taken and doubled back!
- Performance Tests
  - Comparing B+tree and MGIndex
  - Really big data sets!
  - Index parameter tuning
- Performance Tests - v2.3
- Using the Code
- Differences to v1
- Using RaptorDBString and RaptorDBGuid
- Global parameters
- RaptorDB interface
- Non-clean shutdowns
- Removing Keys
- Unit tests
- File Formats
  - File Format : *.mgdat
  - File Format : *.mgbmp
  - File Format : *.mgidx
  - File Format : *.mgbmr , *.mgrec
- History

Downloads

- RaptorDB_v2.0.zip - 38.7 KB
- RaptorDB_v2.1.zip - 39 KB
- RaptorDB_v2.2.zip - 39 KB
- RaptorDB_v2.3.zip - 39.6 KB
- RaptorDB_v2.4.zip - 39.9 KB

Introduction

This article is version 2 of my previous article, found here (http://www.codeproject.com/Articles/190504/RaptorDB). I had to write a new article because this version is a complete redesign and re-architecture of the original, so it would not fit with the previous article. In this version I have done away with the b+tree and hash index in favor of my own MGIndex structure, which is for all intents and purposes superior, and the performance numbers speak for themselves.

What is RaptorDB?

Here is a brief overview of the terms used to describe RaptorDB:

- Embedded: You can use RaptorDB inside your application as you would any other DLL; you don't need to install services or run external programs.
- NoSQL: A grass-roots movement to replace relational databases with storage systems more relevant and specialized to the application in question. These systems are usually designed for performance.
- Persisted: Any changes made are stored on hard disk, so you never lose data on power outages or crashes.
- Dictionary: A key/value storage system, much like the implementation in .NET.
- MurmurHash: A non-cryptographic hash function created by Austin Appleby in 2008 (http://en.wikipedia.org/wiki/MurmurHash).

Features

RaptorDB has the following features:

- Very fast performance (typically 2x the insert and 4x the read performance of RaptorDB v1).
- Extremely small footprint at ~50 KB.
- No dependencies.
- Multi-threaded support for reads and writes.
- Data pages are separate from the main tree structure, so they can be freed from memory if needed and loaded on demand.
- Automatic index file recovery on non-clean shutdowns.
- String keys are UTF8 encoded and limited to 60 bytes if not specified otherwise (maximum is 255 characters).
- Support for long string keys with the RaptorDBString class.
- Duplicate keys are stored as a WAH bitmap index for optimal storage and access speed.
- Two modes of operation: flush-immediate and deferred (the latter being faster, at the expense of risking data loss on a non-clean shutdown).
- Enumerating the index is supported.
- Enumerating the storage file is supported.
- Removing keys is supported.

Why another data structure?

There is always room for improvement, and the ever-present need for faster systems compels us to create new ways of doing things. MGIndex is no exception to this rule. Currently MGIndex outperforms a b+tree by a factor of 15x on writes and 21x on reads, while keeping the main b+tree feature of disk friendliness.

The problem with a b+tree

Theoretically a b+tree is O(log k N), log base k of N; for typical values of k (above 200, for example) the b+tree should outperform any binary tree because it uses fewer operations. However, I have found the following problems that hinder performance:

- Pages in a b+tree are usually implemented as a list or array of child pointers, so while finding an insert position is an O(log k) operation, the insert actually has to move children around in the array or list, which is time consuming.
- Splitting a page in a b+tree has to fix parent nodes and children, so it effectively locks the tree for the duration; parallel updates are therefore very, very difficult and have spawned a lot of research articles.

Requirements of a good index structure

So what makes a good index structure? Here is what I consider essential:

- Page-able data structure:
  - Easy loading and saving to disk.
  - Free memory on memory constraints.
  - On-demand loading for optimal memory usage.
- Very fast inserts and retrievals.
- Multi-threadable and parallelizable usage.
- Pages linked together, so you can do range queries by easily going to the next page.

The MGIndex

MGIndex takes the best features of a b+tree and improves upon them while removing the impediments. MGIndex is also extremely simple in design, as the following diagram shows:

[diagram: MGIndex structure - page list over data pages]

As you can see, the page list is a sorted dictionary of the first key of each page, along with the associated page number and page item count. A page is a dictionary of key and record-number pairs. This format yields a semi-sorted key list: within a page the data is not sorted, but pages are in sort order relative to each other. A look-up for a key therefore compares first keys in the page list to find the required page, then gets the key from that page's dictionary.

MGIndex is O(log M) + O(1), M being N / PageItemCount [PageItemCount = 10000 in the Globals class]. This means you do a binary search in the page list in log M time and get the value in O(1) time within a page.

RaptorDB starts off by loading the page list and is good to go from there; pages are loaded on demand, based on usage.

Page Splits

When a page gets full and reaches PageItemCount, MGIndex sorts the keys in the page's dictionary and splits the data into two pages (similar to a b+tree split), then updates the page list by adding the new page and changing the first keys as needed. This ensures the sorted page progression. Interestingly, the processor architecture plays an important role here, as you can see in the performance tests, since splitting is directly bound by key sorting time; the Core iX processors seem to be very good in this regard.
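The page-list and split mechanics described above are easy to demonstrate. The following is a toy sketch of the MGIndex idea in Python (my illustration, not the C# implementation): a sorted list of first keys locates a page by binary search, each page is a plain unsorted dictionary, and a full page is sorted and split in two.

```python
import bisect

PAGE_ITEM_COUNT = 4  # Globals.PageItemCount defaults to 10000; tiny here for demonstration

class ToyMGIndex:
    def __init__(self):
        # parallel lists: sorted first keys and their page dictionaries
        self.first_keys = []   # first (smallest) key of each page, kept sorted
        self.pages = []        # pages[i] is an unsorted dict of key -> record number

    def _page_index(self, key):
        # binary search the page list: O(log M), M = number of pages
        return max(bisect.bisect_right(self.first_keys, key) - 1, 0)

    def set(self, key, recnum):
        if not self.pages:
            self.first_keys.append(key)
            self.pages.append({})
        i = self._page_index(key)
        page = self.pages[i]
        page[key] = recnum            # O(1) dictionary insert within the page
        if key < self.first_keys[i]:
            self.first_keys[i] = key  # track the page's smallest key
        if len(page) >= PAGE_ITEM_COUNT:
            self._split(i)

    def _split(self, i):
        # sort the full page's keys, move the upper half to a new page,
        # and register the new page's first key in the page list
        items = sorted(self.pages[i].items())
        mid = len(items) // 2
        self.pages[i] = dict(items[:mid])
        self.pages.insert(i + 1, dict(items[mid:]))
        self.first_keys.insert(i + 1, items[mid][0])

    def get(self, key):
        if not self.pages:
            return None
        return self.pages[self._page_index(key)].get(key)  # O(log M) + O(1)

idx = ToyMGIndex()
for rec, k in enumerate(["m", "c", "t", "a", "x", "f", "q"]):
    idx.set(k, rec)
print(idx.get("a"), idx.get("q"), len(idx.pages))  # prints: 3 6 3
```

Compare with the article's numbers: with PageItemCount = 10000, a 10-million-key index needs only about a thousand page-list entries, so the binary search touches a tiny, always-in-memory structure while the bulk of the data stays in demand-loaded pages.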
Interesting side effects of MGIndex

Here are some interesting side effects of MGIndex:

- Because the data pages are separate from the page list structure, locking is easy to implement and is isolated within a page rather than the whole index; not so for normal trees.
- Splitting a full page is simple and does not require a tree traversal for node overflow checking, as a b+tree does.
- Main page list updates are infrequent, so locking the main page list structure does not impact performance.

The above make MGIndex a really good candidate for parallel updates.

The road not taken / the road taken and doubled back!

Originally I used an AA tree, found here (http://demakov.com/snippets/aatree.html), for the page structures, it being an extremely good and simple structure to understand. After testing it against the internal .NET SortedDictionary (which is a red-black tree structure) it proved slower, and so it was scrapped (see the performance comparisons).

I decided against using SortedDictionary for the pages because it was slower than a normal Dictionary, and for the purposes of a key/value store the sortedness was not needed and could be handled in other ways. You can switch to SortedDictionary in the code at any time if you wish; it makes no difference to the overall code, other than letting you remove the sorting in the page splits.

I also tried an assortment of sorting routines, like dual-pivot quicksort, timsort, and insertion sort, and found that they were all slower than the internal .NET quicksort routine in my tests.

Performance Tests

In this version I have compiled a list of computers I have tested on; the results are below. As you can see, you get a very noticeable performance boost with the new Intel Core iX processors.

[table: performance results by test machine]

Comparing B+tree and MGIndex

For a measure of the relative performance of a b+tree, a red-black tree, and MGIndex, I have compiled the following results. Times are in seconds.
[table: b+tree vs. SortedDictionary vs. MGIndex timings]

- B+Tree: the index code from RaptorDB v1.
- SortedDictionary: the internal .NET implementation, which is said to be a red-black tree.

Really big data sets!

To really put the engine under pressure I ran the following tests on huge data sets (times are in seconds, memory is in GB):

[table: large data set results]

These tests were done on an HP ML120 G6 system with 12 GB RAM and 10k RAID disk drives, running Windows 2008 Server R2 64-bit. For a measure of relative performance to RaptorDB v1, I have included a 20 million item test with that engine also. I skipped the get test over 100 million records, as it would require a huge in-memory array to store the Guid keys for finding later; that is why there is an NT (not tested) in the table. Interestingly, the read performance is relatively linear.

Index parameter tuning

To get the most out of RaptorDB you can tune some parameters specific to your hardware. PageItemCount controls the size of each page. Here are some of my results:

[table: PageItemCount tuning results]

I have chosen 10000 as a good compromise for both reads and writes; you are welcome to tinker with this on your own systems and see what works better for you.

Performance Tests - v2.3

In v2.3 a single simple change, converting internal classes to structs, rendered huge performance improvements of 2x+ and at least 30% lower memory usage. You are pretty much guaranteed to get 100k+ insert performance on any system. Some of the tests above were run 3 times because the computers were in use at the time (not cold booted for the tests), so the initial results were off. The HP G4 laptop is just astonishing.

I also re-ran the 100 million item test on the last server in the above list; here are the results:

[table: 100 million item re-run]

As you can see in the above test, the insert time is 4x faster (although the computer's specs do not match the HP system tested earlier) and, incredibly, the memory usage is half that of the previous test.
Using the Code

To create or open a database you use the following code:

    // to create a db for guid keys without allowing duplicates
    var guiddb = RaptorDB.RaptorDB.Open("c:\\RaptorDbTest\\multithread", false);

    // to create a db for string keys with a length of 100 characters (UTF8) allowing duplicates
    var strdb = RaptorDB.RaptorDB.Open("c:\\intdb", 100, true);

To insert and retrieve data you use the following code:

    Guid g = Guid.NewGuid();
    guiddb.Set(g, "somevalue");

    string outstr = "";
    if (guiddb.Get(g, out outstr))
    {
        // success: outstr should be "somevalue"
    }

The UnitTests project contains working example code for different use cases, so refer to it for more samples.

Differences to v1

The following is a list of differences in v2 as opposed to v1 of RaptorDB:

- Log files have been removed; they are not needed anymore, as the MGIndex is fast enough for in-process indexing.
- Threads have been replaced by timers; the index is saved to disk in the background without blocking the engine process.
- Messy generic code has been simplified, and the need for RDBDataType has been removed; you can use normal int, long, string, and Guid data types.
- RemoveKey has been added.

Other than that, existing code should compile as-is with the new engine.

Using RaptorDBString and RaptorDBGuid

RaptorDBString is for long string keys (longer than 255 characters) and is really useful for file paths etc. You can use it in the following way:

    // long string keys without case sensitivity
    var rap = new RaptorDBString(@"c:\raptordbtest\longstringkey", false);

RaptorDBGuid is a special engine which will MurmurHash2 the input Guid for lower memory usage (4 bytes as opposed to 16 bytes); this is useful if you have a huge number of items to store.
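RaptorDBGuid's saving comes from hashing each 16-byte Guid down to a 4-byte MurmurHash2 value. The algorithm is compact enough to sketch; below is an illustrative Python transcription of Austin Appleby's 32-bit MurmurHash2 (RaptorDB itself uses a C# port of it).

```python
import uuid

def murmur2(data: bytes, seed: int = 0) -> int:
    """32-bit MurmurHash2 (Austin Appleby, 2008), transcribed to Python."""
    m, mask = 0x5BD1E995, 0xFFFFFFFF
    h = (seed ^ len(data)) & mask
    i = 0
    while len(data) - i >= 4:                      # mix 4 bytes at a time
        k = int.from_bytes(data[i:i + 4], "little")
        k = (k * m) & mask
        k ^= k >> 24
        k = (k * m) & mask
        h = ((h * m) & mask) ^ k
        i += 4
    tail = data[i:]                                # fold in the last 0-3 bytes
    if len(tail) == 3:
        h ^= tail[2] << 16
    if len(tail) >= 2:
        h ^= tail[1] << 8
    if len(tail) >= 1:
        h ^= tail[0]
        h = (h * m) & mask
    h ^= h >> 13                                   # final avalanche
    h = (h * m) & mask
    h ^= h >> 15
    return h

g = uuid.uuid4()
print(f"guid {g} -> {murmur2(g.bytes):#010x}")     # 16 bytes in, 4 bytes out
```

The trade-off is the usual one for a 4-byte hash: collisions become possible, so this mode suits huge Guid sets where memory matters more than the (tiny) collision risk; how collisions are treated is up to the engine.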
You can use it in the following way:

    // murmur hashed guid keys
    var db = new RaptorDBGuid("c:\\RaptorDbTest\\hashedguid");

Global parameters

The following parameters, found in the Globals.cs file, control the inner workings of the engine:

- BitmapOffsetSwitchOverCount (default 10): switch-over point at which duplicates are stored as a WAH bitmap as opposed to a list of record numbers.
- PageItemCount (default 10,000): the number of items within a page.
- SaveTimerSeconds (default 60): background index save timer, in seconds (e.g. save the index to disk every 60 seconds).
- DefaultStringKeySize (default 60): default string key size in bytes (stored as UTF8).
- FlushStorageFileImmetiatley (default false): flush to the storage file immediately.
- FreeBitmapMemoryOnSave (default false): compress and free bitmap index memory on saves.

RaptorDB interface

- Set(T, byte[]): set key and byte array value; returns void.
- Set(T, string): set key and string value; returns void.
- Get(T, out string): get the value for the key into the string output parameter; returns true if the key was found.
- Get(T, out byte[]): get the value for the key into the byte array output parameter; returns true if the key was found.
- RemoveKey(T): removes the key from the index.
- EnumerateStorageFile(): returns the entire contents of the main storage file as an IEnumerable< KeyValuePair >.
- Enumerate(fromkey): enumerates the index from the given key.
- GetDuplicates(T): returns the main storage file record numbers of the duplicates of the specified key, as an IEnumerable.
- FetchRecord(int): returns the value from the main storage file as byte[]; used with GetDuplicates and Enumerate.
- Count(includeDuplicates): returns the number of items in the database index, counting duplicates also if specified.
- SaveIndex(): immediately saves the index to disk (the engine automatically saves in the background on a timer).
- Shutdown(): closes all files and stops the engine.
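The WAH (Word-Aligned Hybrid) encoding that backs the duplicates bitmap (see BitmapOffsetSwitchOverCount above) can be sketched briefly. This toy Python version is my illustration of the general WAH idea, not RaptorDB's on-disk format: a literal 32-bit word carries 31 raw bits, while a fill word (high bit set) run-length encodes consecutive all-zero or all-one 31-bit groups.

```python
def wah_compress(bits):
    """Compress a bit list (padded to a multiple of 31) into 32-bit words."""
    bits = bits + [0] * (-len(bits) % 31)   # pad to a 31-bit boundary
    words = []
    for i in range(0, len(bits), 31):
        group = 0
        for b in bits[i:i + 31]:
            group = (group << 1) | b
        if group in (0, (1 << 31) - 1):     # all zeros or all ones: a fill
            fill = 1 if group else 0
            if words and words[-1] >> 31 and (words[-1] >> 30) & 1 == fill:
                words[-1] += 1              # extend the previous fill's run count
            else:
                words.append((1 << 31) | (fill << 30) | 1)
        else:
            words.append(group)             # literal word: high bit 0, 31 data bits
    return words

def wah_decompress(words):
    bits = []
    for w in words:
        if w >> 31:                          # fill word
            fill = (w >> 30) & 1
            count = w & ((1 << 30) - 1)      # number of 31-bit groups in the run
            bits.extend([fill] * (31 * count))
        else:                                # literal word
            bits.extend((w >> i) & 1 for i in range(30, -1, -1))
    return bits

# a sparse duplicates bitmap: record numbers 3 and 95 are set
bitmap = [0] * 124
bitmap[3] = bitmap[95] = 1
compressed = wah_compress(bitmap)
assert wah_decompress(compressed) == bitmap
print(len(bitmap), "bits ->", len(compressed), "words")
```

This shows why duplicate sets compress so well: long runs of identical bits between set record numbers collapse into single fill words, while access stays fast because the words can be scanned without fully expanding the bitmap.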
Non-clean shutdowns

In the event of a non-clean shutdown, RaptorDB will automatically rebuild the index from the last indexed item to the last inserted item in the storage file. This feature also lets you delete the mgidx file and have RaptorDB rebuild the index from scratch.

Removing Keys

In v2 of RaptorDB, removing keys has been added, with the following caveats:

- Data is not deleted from the storage file; a special delete record is appended to the storage file, which tracks deletes and also helps with index rebuilding when needed.
- Data is removed from the index.

Unit Tests

The following unit tests are included in the source code (the output folder for all tests is C:\RaptorDbTest):

- Duplicates_Set_and_Get: generates 100 duplicates each of 1000 Guids and fetches each one (this tests the WAH bitmap subsystem).
- Enumerate: generates 100,001 Guids and enumerates the index from a predetermined Guid, showing the result count (the count will differ between runs).
- Multithread_test: creates 2 threads inserting 1,000,000 items each and a third thread reading 2,000,000 items, with a delay of 5 seconds from the start of the inserts.
- One_Million_Set_Get: inserts 1,000,000 items and reads 1,000,000 items.
- One_Million_Set_Shutdown_Get: as above, but with a shutdown and restart before reading.
- RaptorDBString_test: creates 100,000 1 KB string keys and reads them from the index.
- Ten_Million_Optimized_GUID: uses the RaptorDBGuid class, MurmurHashing 10,000,000 Guids, writing and reading them.
- Ten_Million_Set_Get: the same as the 1 million test but with 10 million items.
- Twenty_Million_Optimized_GUID: the same as the 10 million test but with 20 million items.
- Twenty_Million_Set_Get: the same as the 1 million test but with 20 million items.
- StringKeyTest: a test for normal string keys of max 255 length.
- RemoveKeyTest: a test that removing keys works properly between shutdowns.
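The recovery and delete-record scheme described above, an append-only storage file replayed to rebuild the index, can be sketched in a few lines of Python (an illustration of the idea, not RaptorDB's file format): every Set appends a data record, every RemoveKey appends a tombstone record, and rebuilding simply replays the log.

```python
class ToyKeyStore:
    """Append-only log of (key, value, deleted) records plus an in-memory index."""
    def __init__(self):
        self.log = []           # stands in for the .mgdat storage file
        self.index = {}         # key -> record number, stands in for the MGIndex
        self.indexed_upto = 0   # how far the saved index covers the log

    def set(self, key, value):
        self.log.append((key, value, False))
        self.index[key] = len(self.log) - 1

    def remove_key(self, key):
        # data is never deleted from storage; a delete record is appended
        self.log.append((key, None, True))
        self.index.pop(key, None)

    def get(self, key):
        recnum = self.index.get(key)
        return None if recnum is None else self.log[recnum][1]

    def rebuild_index(self, from_record=0):
        # replay the storage file from the last indexed item (0 = from scratch,
        # like deleting the mgidx file); tombstones undo earlier sets
        if from_record == 0:
            self.index.clear()
        for recnum in range(from_record, len(self.log)):
            key, _, deleted = self.log[recnum]
            if deleted:
                self.index.pop(key, None)
            else:
                self.index[key] = recnum
        self.indexed_upto = len(self.log)

db = ToyKeyStore()
db.set("a", b"1"); db.set("b", b"2"); db.remove_key("a")
db.rebuild_index()               # simulate recovery after a non-clean shutdown
print(db.get("a"), db.get("b"))  # prints: None b'2'
```

Because the log is append-only and replay is idempotent, a crash can at worst lose the unsaved tail of the index, never corrupt the data already on disk; replaying from the last saved position repairs the index.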
File Formats

File Format : *.mgdat

Values are stored in the following structure on disk:

[diagram: mgdat record layout]

File Format : *.mgbmp

Bitmap indexes are stored in the following format on disk:

[diagram: mgbmp record layout]

The bitmap row is variable in length and will be reused if the new data fits in the record size on disk; if not, another record is created. For this reason a periodic index compaction might be needed to remove unused records left over from previous updates.

File Format : *.mgidx

The MGIndex index is saved in the following format:

[diagram: mgidx layout]

File Format : *.mgbmr , *.mgrec

The rec file is a series of long values written to disk with no special formatting. These values map record numbers to offsets in the BITMAP index file and the DOCS storage file.

History

- Initial release v2.0 : 19th January 2012
- Update v2.1 : 26th January 2012
  - Lock on SafeDictionary iterator set; thanks to igalk474.
  - string default(T) -> "" instead of null; thanks to Ole Thrane for finding it.
  - MGIndex string first-key null fix.
  - Added a test for normal string keys.
  - Fixed the link to the v1 article.
- Update v2.2 : 8th February 2012
  - Bug fix for RemoveKey; thanks to syro_pro.
  - Removed un-needed initialization in SafeDictionary; thanks to Paulo Zemek.
- Update v2.3 : 1st March 2012
  - Changed internal classes to structs (2x+ speed, 30% less memory).
  - Added a keystore class and code refactoring.
  - Added a v2.3 performance section to the article.
- Update v2.4 : 7th March 2012
  - Bug fix: remove key sets page isDirty; thanks to Martin van der Geer.
  - Page is a class again, to fix keeping its state.
  - Added the RemoveKeyTest unit test.
  - Removed MemoryStream from StorageFile.CreateRowHeader for speed.
  - The current record number is also set in the bitmap index for duplicates.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL).

About the Author

Mehdi Gholam, Chief Technology Officer, United Kingdom. Mehdi first started programming when he was 8, on a BBC+ 128k machine in 6512 processor language; after various hardware and
software changes he eventually came across .NET and C#, which he has been using since v1.0. He is formally educated as a systems analyst and industrial engineer, but his programming passion continues.

Comments and Discussions

2.3 numbers — dave.dolan, 2 Mar '12

That's a huge difference. Of course the allocation profile for the objects is going to be totally different now, so much more will happen stack-side instead of in the heap(s). Have you cracked open a profiler on it to see which operations are the heaviest? (I have a perf profiler, but not a good memory profiler, or I'd try it myself.)

Also, if you don't mind my saying so a little more forcefully, you should add that range query code. It doesn't have to be the stuff I sent you, but the main benefit of an index is range queries, and so far you do not expose this ability. Since you have the key markers on each block, it's easy to know in which block you will need to check for the end of the range; in all the rest you can simply enumerate. I'd be happy to help with this part.

Re: 2.3 numbers — Mehdi Gholam, 4 Mar '12

Unfortunately, I don't have a memory profiler either; I might try the Red Gate profiler when I get the chance. Range queries are on my list, I promise! I'm a bit busy with the doc version at the moment (which has to use range queries); I will post back code to this KV version as soon as I can.

Its the man, not the machine - Chuck Yeager
If at first you don't succeed... get a better publicist
If the final destination is death, then we should enjoy every second of the journey.
Page read error header invalid, number = 1 — danlobo, 25 Feb '12

Mehdi, correct me if I'm wrong, but after starting the RaptorDB engine the first time, my first Get call throws this exception. A suggestion to correct this is to change IndexFile.cs, line 290, from

    SeekPage(number);

to

    SeekPage(number - 1);

This will make the index look into the first page saved on disk. I believe the current version always looks into the second page, which at first doesn't exist, causing this error. I believe the first page in the file is never being touched; this will probably correct that too. Regards

Re: Page read error header invalid, number = 1 — Mehdi Gholam, 26 Feb '12

Pages start from 1; 0 is the page list page. I will look into this issue.

Re: Page read error header invalid, number = 1 — danlobo, 26 Feb '12

Oops. My bad. Overlooked this part.

Nuget — gregbacchus, 19 Feb '12

RaptorDB looks to be great. Just what I am looking for. Are you considering making it available on NuGet?

Re: Nuget — Mehdi Gholam, 20 Feb '12

Thanks Greg! I will put it on NuGet as soon as I can.
Great article — CIDev, 13 Feb '12

A very well written and illustrated article that presents a very useful storage tool. Five from me.

Just because the code works, it doesn't mean that it is good code.

Re: Great article — Mehdi Gholam, 14 Feb '12

Thanks

Some perf info — torial, 9 Feb '12

For a dual quad-core E5420 @ 2.5 GHz, Windows Server 2008, 16 GB RAM, 15K SAS drive, I got for the 10,000,000 writes test:

    Page Count = 1208
    Total Split Time = 6.69499999...
    set time = 75.489
    get time = 98.648

This is for version 2.2.

Re: Some perf info — Mehdi Gholam, 10 Feb '12

I would have expected more from your hardware. Are you running via NUnit or the console application? Try running the console app, as it is 64-bit and faster.

Re: Some perf info — torial, 10 Feb '12

I was using just the console app, but it is a 32-bit OS (I plan to rebuild the box at some point, but it won't happen soon). Perhaps the lower than expected performance is the result of being 32-bit? I re-ran the tests without other apps running and got slightly better results (69 set time, 83 get time, split time 6.95999, page count 1278).
Nice piece of work, but I got a null pointer exception at RemoveKey — syro_pro, 8 Feb '12

First, nice piece of work. When I call the method RemoveKey, I get a NullReferenceException:

    System.NullReferenceException was unhandled
    Message=Object reference not set to an instance of an object.
    Source=RaptorDB
    StackTrace:
       at RaptorDB.StorageFile`1.WriteData(T key, Byte[] data, Boolean deleted) in ...\RaptorDB_v2.1\RaptorDB\Storage\StorageFile.cs:line 136
       at RaptorDB.RaptorDB`1.RemoveKey(T key) in ...\RaptorDB_v2.1\RaptorDB\RaptorDB.cs:line 308

    byte[] hdr = CreateRowHeader(kl, data.Length);

data is an array of bytes; this parameter is set to null by RemoveKey.

Re: Nice piece of work, but I got a null pointer exception at RemoveKey — Mehdi Gholam, 8 Feb '12

Thanks, I will look into it and update soon.

SafeDictionary — Paulo Zemek, 1 Feb '12

I was just looking at the SafeDictionary code, and there is a small useless overhead. When you declare the _Dictionary variable you initialize it, and then in the constructors you initialize it again. Considering that the constructors may pass parameters to it, it should not be initialized during declaration.

Re: SafeDictionary — Mehdi Gholam, 1 Feb '12

Ah, you're right; legacy changes. I will fix it in the next iteration. Thanks Paulo.
Raptor DB vs Redis — gokul78, 30 Jan '12

Do you have any benchmarks which compare RaptorDB with Redis?

Re: Raptor DB vs Redis — Mehdi Gholam, 31 Jan '12

I haven't worked with Redis, but a quick search reveals the following:

    The test was done with 50 simultaneous clients performing 100000 requests. The value SET and GET is a 256 bytes string. The Linux box is running Linux 2.6, it's Xeon X3320 2.5 GHz. Test executed using the loopback interface (127.0.0.1). Results: about 110000 SETs per second, about 81000 GETs per second.

This is similar hardware to my 10 million item test [this is a 5 million insert test]. From the following link: http://redis.io/topics/benchmarks

Re: Raptor DB vs Redis — dave.dolan, 1 Feb '12

Not really apples to apples here... This is an ordered key/value store and it's meant to take advantage of the disk, including range queries. For now a huge limitation of Redis is that all keys MUST fit into RAM; Raptor doesn't have this limitation. In fact, it's built on a page-able data structure for the sole purpose of NOT having to keep it all in RAM in order to work with it. Redis just happens to have a sort of bolted-on feature that supports disk storage, but it's meant to mostly work in RAM. Also, the keys in Redis are not in 'storage optimized' form, they are in 'query optimized' form, which makes them larger than they'd otherwise be. If you have tens of millions or indeed billions of items then Redis isn't going to work well.
They do have a virtual memory option, which is deprecated and scheduled to be removed, that allows it to spill onto disk, but it's buggy and not slated to be fixed. I think they're also coming out with a cluster feature set soon, but alas, for now no such luck! (And since I know some of you are saying "Redis can be sharded in client code!" I'll point out that yes, it can, but it's 'manually' done, which means you could do the same with RaptorDB, and thus it's a non sequitur.)

My vote of 5 — taony, 30 Jan '12

I have read your articles since v1.6. At that time, I saw your RaptorDB had a limitation on deleting. Now V2 supports this. You are great! Please also take a look at freeMDB: http://code.google.com/p/freemdb/

My vote of 5 — Halil ibrahim Kalkan, 29 Jan '12

Congratulations

Re: My vote of 5 — Mehdi Gholam, 29 Jan '12

Thanks Halil!

Re: My vote of 5 — Mehdi Gholam, 29 Jan '12

Try using RaptorDB and fastJSON for your DotNetMQ, for extra speed and throughput!
Re: My vote of 5 — Halil ibrahim Kalkan, 30 Jan '12

I thought so (especially for RaptorDB), thanks a lot

performance — Unruled Boy, 27 Jan '12

On my SSD (Micron M4 128G) + i5 + 6GB:

    Page Count = 1198, Total Split Time = 5.60031889999999
    set time = 74.7922778
    get time = 57.2912769

Sounds not as fast as I expected?

    2012-01-28 08:06:01|DEBUG|1|RaptorDB.RaptorDB`1|| Current Count = 0
    2012-01-28 08:06:01|DEBUG|1|RaptorDB.RaptorDB`1|| Checking Index state...
    2012-01-28 08:06:01|DEBUG|1|RaptorDB.RaptorDB`1|| Starting save timer
    2012-01-28 08:08:14|DEBUG|1|RaptorDB.RaptorDB`1|| Shutting down
    2012-01-28 08:08:14|DEBUG|1|RaptorDB.RaptorDB`1|| saving to disk
    2012-01-28 08:08:14|DEBUG|1|RaptorDB.MGIndex`1|| Total split time (s) = 5.60031889999999
    2012-01-28 08:08:14|DEBUG|1|RaptorDB.MGIndex`1|| Total pages = 1198
    2012-01-28 08:08:19|DEBUG|1|RaptorDB.RaptorDB`1|| index saved
    2012-01-28 08:08:19|DEBUG|1|RaptorDB.IndexFile`1|| Shutdown IndexFile
    2012-01-28 08:08:19|DEBUG|1|RaptorDB.BitmapIndex|| Shutdown BitmapIndex
    2012-01-28 08:08:19|DEBUG|1|RaptorDB.RaptorDB`1|| Shutting down log

Regards, unruledboy_at_gmail_dot_com, http://www.xnlab.com