About FYP

My FYP hasn't made any progress for a while now. I have to admit I was rather unfamiliar with my own code after the December holiday, so a lot of my time is now spent reading code and documentation. My current task is to implement the sleep function on the client side. This requires me to look at the client code and turn off the wifi card there when the server's command is sent over. Since the wifi card can easily be turned off on Linux, I have to switch to Linux now.

Linux is in fact a very good thing. Although it doesn't have very powerful tools, its simplicity compensates for any disadvantage it has. For example, making a system call takes just one line of code. Its default packages are so developer-friendly that I can easily compile code any way I like. Of course, the prerequisite is that I understand how the system works.

So basically what I want to do for my FYP is to turn off wifi for a certain duration when the command is sent over. In Linux that is, of course, a single line of code:

      system("rfkill block wifi");
But before that, I in fact have to do many things. Here are some I need to figure out:

  • send command from server to client
  • simulate client packets when his wifi is turned off
  • store sleep information and time on both server and client
  • switch on and off sleep state on both server and client side
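
As a rough sketch of what the client-side handling might look like (the "sleep <seconds>" command format, the function names, and the dry-run flag are my own assumptions, not the actual FYP code):

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/* parse a hypothetical "sleep <seconds>" server command;
 * returns the duration, or -1 if the command is malformed */
int ParseSleepCommand(const char *cmd) {
    int seconds;
    if (sscanf(cmd, "sleep %d", &seconds) != 1 || seconds <= 0)
        return -1;
    return seconds;
}

/* turn the wifi card off for `seconds`, then back on (Linux only);
 * dryRun skips the side effects so the logic can be exercised safely */
void DoSleep(int seconds, int dryRun) {
    if (dryRun)
        return;
    system("rfkill block wifi");     /* one line to kill the radio */
    sleep((unsigned)seconds);
    system("rfkill unblock wifi");
}
```

Storing the sleep duration outside the client connection structure (unlike the previous code) should keep it safe across reconnects.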

The previous guy who was working on a similar algorithm actually left some code I can refer to. But his code seems unreliable to me, because he didn't do offline packet simulation and he put the sleep information inside the client connection structure. Moreover, he never discovered the PVS algorithm, so I assume he didn't really understand the Quake3 code before doing anything. Though his code is unreliable, I still benefited from reading it in some ways.

For example, Quake3's makefile is huge, and I really had no idea where to add my own file to it. His makefile helped me figure this out. He also used the SV_SendServerCommand function, which conforms with my observation. However, he received the command in the wrong place and never acknowledged it, which causes an overflow and eventually a disconnection. I corrected this by putting the command reception inside CL_GetServerCommand. So the 1st problem is fixed.

The 3rd and 4th problems seem easy, so I should solve those first, then attempt the 2nd. Hopefully the Quake3 code already implements some function for offline packet simulation that I can make use of. Otherwise, it would be troublesome to do the playerState update on my own.

It may seem that this stuff is pretty easy to do, but that's not true. It really took me a while to fully understand the Quake3 architecture. Michael Abrash's 'The Big Picture' helped me a lot, and I like what he said: "There are a million ways not to finish a project, but there's only one way to finish: Put your head down and grind it out until it's done." It inspires me when I feel frustrated.

There is still a long road ahead for my FYP. The goal is to make my algorithm work perfectly, without any glitches. By then, I should have fully understood the Quake3 code. One part of Quake3 that interests me greatly is the renderer. I have only been able to spend a little time looking at it. Though it uses OpenGL, very few native GL functions are actually exposed; or maybe I haven't found the correct place to look. I will look into that after this project.


[Koei] 2nd coding test


They gave me a second chance at the coding test I did last Saturday at Koei Singapore's office. I did screw up that test, which consisted solely of C++ questions.

Now they have granted me 3 weeks to brush up my C++, after which they will test my C++ by email, Internet and phone. Though I'm not sure how that can be done, this news really gives me new hope and lightens me up!

Oh yeah! I have to come up with a plan for these 3 weeks. Write as many C++ programs as possible!


Repost (bookofhook) : = The Quake3 Networking Model =

This article is a reprint/updated version of an e-mail I sent out to the mud-dev mailing list a few years ago, describing the Q3 networking model as described to me by John Carmack.  I did this partly out of my own curiosity (since I was firmly entrenched in the graphics side of things) and partly out of a desire to propagate information on the 100% unreliable networking model in Q3, which I felt (and still feel) was fairly groundbreaking due to its simplicity and ease of understanding.

== The First Attempt (QTEST/Quake2) ==

Carmack's first real networking implementation, back in 1995, used TCP for QuakeTest (QTEST).  This was fine for LAN play, because the large packets (8K) wouldn't fragment on a LAN (by "fragment", I mean to the point where disassembly and reassembly induced significant overhead), but didn't work so well over the Internet due to fragmentation (where dis/reassembly and lost packets often resulted in very expensive resends).  His next iteration involved using UDP with both reliable and unreliable data, pretty much what many would consider a standard networking architecture.  However, standard mixed reliable/unreliable implementations tend to generate very hard-to-find bugs, e.g. sequencing errors where guaranteed messages referenced entities altered through unreliable messages.

== Quake3 ==

The final iteration (Quake3), which was the first time he really felt he "got it right" (whereas Carmack always felt a little uneasy with previous implementations' fragility), used a radically different approach.  With Quake3 he dropped the notion of a reliable packet altogether, replacing the previous network packet structure with a single packet type -- the client's necessary game state.  The server sends sequenced game state updates that are delta compressed from the last acknowledged game state the client received.  This "just works".  Dropped packets are handled implicitly, and specific commands are never acknowledged independently -- last known state acks are piggybacked on regular incoming/outgoing traffic.

The client's receive logic boils down to:
      if ( newState.sequence < lastState.sequence ) {
         // discard out-of-date packet
      } else if ( newState.sequence > lastState.sequence ) {
         lastState = deltaUncompress( lastState, newState );
         ackServer( lastState.sequence );
      }

The client then extrapolates movement and whatever other information it needs based on the last game state it received.

It's even simpler on the server:
      deltaCompressState( client.lastAckState, newState, &compressedState );

      sendToClient( client, compressedState );
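
To make the delta idea concrete, here is a minimal sketch of what a deltaCompress/deltaUncompress pair could look like; the state is reduced to a flat array of ints, which is far simpler than Quake3's real entity state, so treat the names and layout as illustrative only:

```c
#define NUM_FIELDS 8

/* a delta records only the fields of the new state that differ
 * from the base state: (index, value) pairs plus a count */
typedef struct {
    int changed[NUM_FIELDS];
    int value[NUM_FIELDS];
    int count;
} Delta;

/* record every field of newState that differs from base */
void deltaCompress(const int base[NUM_FIELDS],
                   const int newState[NUM_FIELDS], Delta *d) {
    d->count = 0;
    for (int i = 0; i < NUM_FIELDS; i++) {
        if (base[i] != newState[i]) {
            d->changed[d->count] = i;
            d->value[d->count]   = newState[i];
            d->count++;
        }
    }
}

/* apply the delta on top of the base to reconstruct the new state */
void deltaUncompress(int state[NUM_FIELDS], const Delta *d) {
    for (int i = 0; i < d->count; i++)
        state[d->changed[i]] = d->value[i];
}
```

If no packets are dropped, each delta stays tiny; as acks fall behind, more fields differ from the acked base and the delta grows, which is exactly the bandwidth trade-off described below.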

The server never sits around waiting for an acknowledgement. As a result, latencies are much lower than if you have code that sits there twiddling its thumbs waiting for a synchronous ACK.

There are two downsides to this implementation. The big one is that it soaks more bandwidth since the server is constantly pumping new state to the client instead of just sending new state when it doesn't get an ACK. Also, the amount of data sent grows with the number of dropped packets since the delta grows as a result.

The other downside is that the server must buffer a lot of data, specifically of the last acked state to the client (so it can delta compress) along with a list of all the previous states it has sent to the client (back to the last acked one). This is necessary so it can rebase its delta comparisons on any of the game states that the client has acked.

For example, let's say the client last acked sequence 14. The server is sending out new game states with incrementing sequences. It may be up to sequence 20 when it gets an ack from the client that it has received sequence 17. The server has to have the state that was sent as part of sequence 17 in order to rebase the delta compression, so if game states are large enough or the buffers are long enough, this can grow fairly high (to the tune of several hundred K per client).
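
A sketch of that buffering scheme (the names and sizes here are mine; Quake3's actual structures differ): the server keeps a power-of-two ring of recent snapshots per client, looks up the delta base by sequence number, and falls back to a full update when the acked snapshot has already been overwritten.

```c
#define PACKET_BACKUP 32            /* snapshots kept per client (power of two) */
#define PACKET_MASK   (PACKET_BACKUP - 1)

typedef struct {
    int sequence;                   /* which update this snapshot went out as */
    int dummyState;                 /* stand-in for the real game state       */
} Snapshot;

typedef struct {
    Snapshot snapshots[PACKET_BACKUP];
} ClientBuffer;

/* remember an outgoing snapshot so a later ack can rebase against it */
void StoreSnapshot(ClientBuffer *cb, Snapshot s) {
    cb->snapshots[s.sequence & PACKET_MASK] = s;
}

/* find the delta base for an acked sequence; returns NULL when the
 * ring slot has been reused, forcing a full (non-delta) update */
Snapshot *GetDeltaBase(ClientBuffer *cb, int ackedSequence) {
    Snapshot *s = &cb->snapshots[ackedSequence & PACKET_MASK];
    return (s->sequence == ackedSequence) ? s : (Snapshot *)0;
}
```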

All reliable data that another node needs is sent repeatedly until the sender receives an update for most-recent-ack (indicating that the packet has been received). For example, if a player sends a chat message (reliable) with update 6, he will continually send that chat message on subsequent state updates until he receives notification from the server that it has received an update >= 6.  Brute force, but it works.
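
That resend rule is simple enough to sketch directly (the structures here are illustrative, not Quake3's actual reliable-command buffer): every pending command whose sequence is above the peer's last ack gets copied into each outgoing packet.

```c
#define MAX_RELIABLE 64

typedef struct {
    int  sequence;        /* update the command was issued at */
    char text[64];        /* e.g. a chat message              */
} ReliableCmd;

/* collect the commands that must ride along on this packet:
 * everything newer than the last sequence the peer acknowledged */
int CommandsToResend(const ReliableCmd *cmds, int numCmds,
                     int lastAckedSequence,
                     const ReliableCmd **out, int maxOut) {
    int n = 0;
    for (int i = 0; i < numCmds && n < maxOut; i++)
        if (cmds[i].sequence > lastAckedSequence)
            out[n++] = &cmds[i];
    return n;
}
```

Once an ack with sequence >= the command's sequence comes back, the command simply stops qualifying and is never sent again.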

== Port Allocation ==

A note about using multiple ports -- there is one significant advantage to using multiple ports, and that's that the OS buffers incoming data per port. So if you have a chance of a port buffer overflow, then multiple ports may not be a bad idea. If a port overflow is highly unlikely to occur, then multiple ports probably just complicate matters.

Speaking of port buffer overflows, John doesn't think this is a problem. People that spin a second thread just to block on incoming ports are probably just doing extra work that doesn't need to be done. Effectively a thread that pumps the data ports is duplicating the exact same work the network stack is doing in the OS. Instead, John just pumps the network stack at the top of the event loop. Goes to show that brute force sometimes "just works". On a listen server (i.e. you're hosting a server on your client), where frame times can be several dozen milliseconds, there is a chance that you'll get a buffer overrun. In that case there are some options in Q3 to minimize buffer sizes, etc. to reduce the likelihood of a buffer overrun.

One nice thing about Q3's networking architecture, however, is that even in the case of a buffer overrun it just keeps working. There is no effective difference to this system between a dropped packet and a buffer overrun; as a matter of fact, this may actually be masking some real buffer overruns, but so far Carmack's pretty sure that he's not even come close to hitting an overrun situation. Of course, the dynamics of a shooter are different than those of an MMOG (fewer clients, much higher bandwidth per client), but it's interesting nonetheless.

== NAT ==

One interesting note on NAT: apparently some older routers will randomly switch the client's port when behind a NAT. This is very rare, but causes lost connections "randomly" for many users. It took forever to track this down, but once it was discovered he solved it by having a client randomly generate a unique client-id (16-bits) at connection time that is appended to every incoming packet. That way multiple clients with the same IP can work just fine, even if their ports get re-assigned mid-session. Humorously enough, he never bothered handling the (so rare as to be insignificant) case where two clients behind the same firewall randomly generate the exact same ID (in the event this occurred one of the clients would be dropped due to badly out of sync state).
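
A sketch of that id-matching idea (Quake3 calls the random 16-bit value the qport; the structures here are simplified): the server matches packets by (address, id) and deliberately ignores the source UDP port, so a NAT reassigning ports mid-session is harmless.

```c
#define MAX_CLIENTS 16

typedef struct {
    int            inUse;
    unsigned int   ip;      /* client address (the NAT's public IP)    */
    unsigned short qport;   /* random 16-bit id chosen at connect time */
} Client;

/* match an incoming packet by (ip, qport); the source UDP port is
 * deliberately not part of the lookup */
int FindClient(const Client *clients, int numClients,
               unsigned int ip, unsigned short qport) {
    for (int i = 0; i < numClients; i++)
        if (clients[i].inUse && clients[i].ip == ip &&
            clients[i].qport == qport)
            return i;
    return -1;
}
```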

== Compression, Encryption, and Packets ==

Aside from application level delta compression, Q3 also does per-packet Huffman compression. Works great, simple to implement, and the upside to something more aggressive is very low.

He has done some half-hearted work on encryption, but basically app/stream-level encryption is pointless because of the sophistication of hackers. In the future he'll probably rely on higher level inspection (a la Punkbusters for Counter-Strike) instead of cute bit-twiddling.

Finally, I asked about packet size. He thinks the notion of a 512-byte optimal MTU is pretty much at least 5 years out of date. He sees almost no problems with a max packet size of 1400 bytes, and fragmentation just isn't an issue. During the course of typical activity, the actual payload size is often smaller than the UDP header size (!), and even
during hectic activity the packet size is on the order of several hundred bytes. The large packets (up to 1.4K) are typically when a large amount of game state must be sent at once, such as at connection time when there is no state to delta against and when there is a lot of startup initialization going on.

== Summary ==

The Q3 networking model obviates the need to even have a discussion about UDP vs. TCP, unreliable vs. reliable, and out-of-order vs. in-order. It's all unreliable UDP delta compressed against the last known state.

A very elegant and effective architecture, albeit one that is ideally suited to a subset of game types.

XOR Encryption

Basically it uses a key string to obtain the cipher text by XORing it byte by byte with the original text, cycling through the key. One more round over the cipher text restores the original text. Very easy to implement, thus widely used.
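
As a sketch, the whole cipher fits in a few lines of C (running the same function twice with the same key restores the original, since x ^ k ^ k == x):

```c
#include <string.h>

/* XOR each byte of buf with the key, cycling through the key bytes;
 * the same call both encrypts and decrypts */
void xorCipher(unsigned char *buf, size_t len, const char *key) {
    size_t keyLen = strlen(key);
    for (size_t i = 0; i < len; i++)
        buf[i] ^= (unsigned char)key[i % keyLen];
}
```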

This scheme can be cracked by frequency analysis, but that is mostly limited to alphabetic text. For things like images and sound, it is much harder to crack.


About Koei Interview

I went to the Koei interview today. It didn't go well; I'd guess I have no chance. Still, let me summarize it here for future reference.
The interview had three sessions.


The personality test I had found mentioned online didn't appear this time. The later group assessment completely took its place, giving a good evaluation of each candidate's personal qualities. We were split into two groups, A and B; I was in group B. First we had our own group discussion. The topic: with the Earth about to be destroyed, humanity must migrate to another planet, but the spaceship can only carry 10 people, and we have to pick 10 out of 30 professions to carry on human civilization. We all sat in a conference room while several Koei staff observed us from the side. Our group was very active; everyone spoke up.


After the group assessment came the coding test. Three people interviewed me; they looked like programmers. One of them handed me a paper with four C++ questions, which I had to finish within 10 minutes. It was my first time doing coding questions on paper under the watch of three people, so I was quite nervous, but I calmed down fairly quickly. The first question was about macros; the second asked why a given program would exit; the third asked for a function's return value; the fourth was about object memory management. The first two were easy, the third was simple but left me a little unsure, and the fourth stumped me. The interviewer even hinted that it was related to the copy constructor, but I still couldn't answer it. They looked somewhat disappointed in me, and I too feel my C++ is really still quite elementary. Then they asked me some things about my resume, but that hardly mattered anymore; I could tell I had no chance.

Ah well, I was a bit depressed when I came out, but I feel better now as I write this summary. Take it one step at a time: since I've lost confidence in C++, I should write more C++ programs from now on. In fact, I also feel the code from my earlier game development work was far from enough. Perhaps this semester's project will involve writing a lot of C++ code.


Koei Recruitment Test

Yesterday I received an email from Koei Singapore inviting me to take the written test on the 8th.