18. Distributed processing part 1: ORM, RPC, login

Game services use several servers to handle large volumes of requests. Designing and implementing a scalable game service therefore requires efficient communication and workload distribution between servers, so that the service behaves as if it were a single server even though it comprises several. Implementing this efficiently normally takes a lot of time and effort. iFun Engine, however, provides a powerful distributed processing feature that requires only simple configuration.

18.1. Using ORM in a distribution environment

ORM Part 1: Overview explained how iFun Engine’s ORM handles DB tasks automatically, without requiring any explicit DB handling. But what happens when the same object is accessed from multiple servers? In some cases the same game object does need to be accessed from several servers; for instance, when a game user gives an inventory item as a gift to a friend.

A simple approach is to synchronize through the DB, but the DB then becomes a bottleneck, which means each server can handle fewer concurrent connections.

The most efficient method, therefore, is to coordinate objects between the game servers directly so that the DB does not need to be touched. Doing this by hand, however, requires a complicated implementation: RPC between the servers and deadlock prevention when several servers access the same objects. iFun Engine provides these features in a simplified manner through the distribution feature configured by the Distribution parameters.

No ORM code changes are needed. For example, if a fetch is invoked and the target object has already been loaded from the DB into a peer’s cache, the ORM sends an RPC to that server to borrow the object and then completes the requested operation transparently.
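
The sketch below is illustrative only; the class name, key field, and FetchById method are assumptions about the code generated from your object model (see ORM Part 1), not a fixed API. The point is that the fetch call looks the same whether the object is cached locally, cached on a peer, or only in the DB.

// Sketch, assuming an ORM-generated class "User" keyed by a string Id
// (names are illustrative). The engine decides whether the object comes
// from the local cache, a peer's cache (via RPC), or the DB.
Ptr<User> user = User::FetchById("target_user_id");
if (not user) {
  // The object does not exist anywhere.
  return;
}
LOG(INFO) << "fetched user: " << user->GetId();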

Important

When using ORM, all servers need to use the same object model definitions. All servers also need to connect to the same DB server.

Servers must also use the same app_id in the MANIFEST.json AppInfo section, as shown below, to be included in the same server group. This app ID is not the client app ID; it is an ID that identifies the server group, so you can use any string you like, as long as it is shared by all servers in the group.

{
  ...
  "AppInfo": {
    "app_id": "my_server_app_id_shared_among_all_the_servers"
  }
  ...
}

18.2. Distribution server management

18.2.1. Distribution tagging

It is often necessary to differentiate server groups for particular purposes. For example, you may want to distinguish the server group handling the lobby from the server group handling rooms in a room-lobby game, or set up particular servers to handle only beginner dungeons.

iFun Engine provides tags on RPC server units to simplify these cases. Tags are nicknames used to distinguish servers; the programmer decides which tags to attach to which servers and what each tag means. You can add tags by calling the function in code, as in the example below, or by listing them in rpc_tags under Distribution parameters (a MANIFEST.json fragment is shown after the code examples below).

A server can have more than one tag and multiple servers can share the same tag.

In the example below, Server1 and Server2 are in the lobby server group and Server1 has the master role. Both servers share the “lobby” tag for this purpose, while Server1 has an additional “master” tag.

Server1 code

Rpc::AddTag("lobby");
Rpc::AddTag("master");
Rpc.AddTag ("lobby");
Rpc.AddTag ("master");

Server2 code

Rpc::AddTag("lobby");
Rpc.AddTag ("lobby");

Now, when another server searches for servers with the “lobby” tag, Server1 and Server2 are returned; a search for the “master” tag returns only Server1.

Server3 code

Rpc::PeerMap peers;
Rpc::GetPeersWithTag(&peers, "lobby");

Rpc::PeerMap masters;
Rpc::GetPeersWithTag(&masters, "master");

// The "master" tag may also be used for other purposes, so if you want to
// explicitly find the master among the servers tagged "lobby", you can do the following.
for (Rpc::PeerMap::iterator it = peers.begin(); it != peers.end(); ++it) {
  Rpc::Tags tags;
  Rpc::GetPeerTags(&tags, it->first);
  if (tags.find("master") != tags.end()) {
    // Found.
  }
}
Dictionary<Guid, System.Net.IPEndPoint> peers;
Rpc.GetPeersWithTag(out peers, "lobby");

Dictionary<Guid, System.Net.IPEndPoint> masters;
Rpc.GetPeersWithTag(out masters, "master");

// The "master" tag may also be used for other purposes, so if you want to
// explicitly find the master among the servers tagged "lobby", you can do the following.
foreach (var pair in peers)
{
  SortedSet<string> tags;
  Rpc.GetPeerTags(out tags, pair.Key);

  if (tags.Contains ("master"))
  {
    // Found.
  }
}

18.2.2. Exporting server lists

18.2.2.1. Rpc::GetPeers(): Exports all server lists

static size_t Rpc::GetPeers(Rpc::PeerMap *ret, bool include_self=false)
public static UInt64 Rpc.GetPeers (out Dictionary<Guid, PeerEndpoint> ret, bool include_self = false)

18.2.2.2. Rpc::GetPeersWithTag(): Exports server lists with particular tags

static size_t GetPeersWithTag(Rpc::PeerMap *ret, const Tag &tag, bool include_self=false)
public static UInt64 Rpc.GetPeersWithTag (out Dictionary<Guid, PeerEndpoint> ret, Rpc.Tag tag, bool include_self = false)
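
For instance, a minimal sketch that enumerates every server in the group, including the calling server itself:

// List every server in the group, including this server.
Rpc::PeerMap peers;
size_t count = Rpc::GetPeers(&peers, true /* include_self */);
LOG(INFO) << "servers in this group: " << count;
for (const auto &pair : peers) {
  LOG(INFO) << "peer id: " << pair.first;
}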

18.2.3. Peer servers’ public IPs

Detecting Server IP Addresses covered how to get the local server’s public IP and introduced HardwareInfo::GetExternalIp() and HardwareInfo::GetExternalPorts().

Similarly, Rpc::GetPeerExternalIp() and Rpc::GetPeerExternalPorts() are provided to get peers’ IPs and ports.

static boost::asio::ip::address Rpc::GetPeerExternalIp(const Rpc::PeerId &peer)
public static System.Net.IPAddress Rpc.GetPeerExternalIp (Rpc.PeerId peer)
static HardwareInfo::ProtocolPortMap Rpc::GetPeerExternalPorts (const Rpc::PeerId &peer)
public static Dictionary<HardwareInfo.FunapiProtocol, ushort> Rpc.GetPeerExternalPorts (Rpc.PeerId peer)
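
For example, a short sketch that logs the public IP of every server tagged "lobby" (the tag is illustrative; the peers are found with Rpc::GetPeersWithTag() as shown earlier):

// Log the public IP of every server tagged "lobby".
Rpc::PeerMap peers;
Rpc::GetPeersWithTag(&peers, "lobby");
for (const auto &pair : peers) {
  boost::asio::ip::address ip = Rpc::GetPeerExternalIp(pair.first);
  LOG(INFO) << "lobby server " << pair.first << " public ip: " << ip;
}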

18.2.4. Sharing status/data between servers

It may be necessary to share server status data between servers in some cases. For example, load balancing between game servers naturally requires that each server’s number of concurrent accesses be known. Likewise, game service monitoring tools need to know the status of all servers.

iFun Engine provides an easy way to share server status. You can set server status or data with the Rpc::SetStatus() function.

static void Rpc::SetStatus(const Json &status);
public static void Rpc.SetStatus (JObject status)

You can read the status a peer has set with Rpc::GetPeerStatus().

static Json Rpc::GetPeerStatus(const Rpc::PeerId &peer);
public static JObject Rpc.GetPeerStatus (Rpc.PeerId peer)

Tip

When Rpc::SetStatus() is invoked, the status is immediately sent to the other servers. For data that changes frequently (concurrent users, number of rooms, etc.), it is therefore better to update periodically with a Timer rather than calling Rpc::SetStatus() on every change.

Example: Sharing data on number of rooms between servers

int64_t g_match_room_count;

void UpdateServerStatus(const Timer::Id &, const WallClock::Value &) {
  Json status;
  status["room_count"] = g_match_room_count;

  Rpc::SetStatus(status);
}

static bool Start() {
  ...
  Timer::ExpireRepeatedly(WallClock::FromSec(10), UpdateServerStatus);
  ...
}
static UInt64 the_match_room_count = 0;

public static void UpdateServerStatus(UInt64 id, DateTime at)
{
  JObject status = new JObject ();
  status["room_count"] = the_match_room_count;

  Rpc.SetStatus (status);
}

public static bool Start()
{
  ...
  Timer.ExpireRepeatedly (WallClock.FromSec (10), UpdateServerStatus);
  ...
}

Example: Choosing the PvP server with the fewest rooms

Rpc::PeerMap servers;
Rpc::GetPeersWithTag(&servers, "pvp");

Rpc::PeerId target;
int64_t minimum_room_count = std::numeric_limits<int64_t>::max();
for (const auto &pair: servers) {
  const Rpc::PeerId &peer_id = pair.first;

  Json status = Rpc::GetPeerStatus(peer_id);
  if (status.IsNull()) {
    continue;
  }

  if (not status.IsObject() ||
      not status.HasAttribute("room_count", Json::kInteger)) {
    LOG(ERROR) << "wrong server status: " << status.ToString();
    continue;
  }

  if (status["room_count"].GetInteger() < minimum_room_count) {
    minimum_room_count = status["room_count"].GetInteger();
    target = peer_id;
  }
}

// target is the least overloaded.
...
Dictionary<Guid, System.Net.IPEndPoint> servers;
Rpc.GetPeersWithTag(out servers, "pvp");

Log.Info ("Check Server Status");
Log.Info ("FindWith Tags = {0}", servers.Count.ToString());

System.Guid target;
UInt64 minimum_room_count = UInt64.MaxValue;
foreach (var pair in servers)
{
  System.Guid peer_id = pair.Key;

  Log.Info ("peer id = {0}", peer_id.ToString());
  JObject status = Rpc.GetPeerStatus (peer_id);
  if (status == null) {
    Log.Info("Status is null");
    continue;
  }

  if (status ["room_count"] == null)
  {
    Log.Error ( "wrong server status: {0}", status.ToString());
    continue;
  }

  if (status ["room_count"].Type != JTokenType.Integer)
  {
    Log.Error ( "wrong server status: {0}", status.ToString());
    continue;
  }

  if ( (UInt64) status ["room_count"] < minimum_room_count) {
    minimum_room_count =  (UInt64) status ["room_count"];
    target = peer_id;
  }
}

// target now holds the least loaded server, so use it.
...

18.3. Managing clients in distribution environments

18.3.1. Linking and unlinking a client and an iFun session (login/logout)

The login feature links a user ID to an iFun session so that each can be looked up from the other.

Linking a user ID when an iFun session is created (login)

// User ID to link
string id = "target_id";
if (not AccountManager::CheckAndSetLoggedIn(id, session)) {
  LOG(WARNING) << id << " is already logged in";
  return;
}
// User ID to link
string id = "target_id";
if (!AccountManager.CheckAndSetLoggedIn (id, session))
{
  Log.Warning ("{0} is already logged in", id);
  return;
}

Unlinking the account ID when a session ends (logout)

AccountManager::SetLoggedOut(session);

// You can also disconnect by account id.
// string id = "target_id";
// AccountManager::SetLoggedOut(id);
AccountManager.SetLoggedOut (session);

// You can also disconnect by account id.
// string id = "target_id";
// AccountManager.SetLoggedOut (id);

Note

You can handle these asynchronously with AccountManager::CheckAndSetLoggedInAsync() and AccountManager::SetLoggedOutAsync(). By setting the max_retry argument of AccountManager::CheckAndSetLoggedInAsync(), you can make the engine retry automatically the specified number of times when a login fails. On a login failure, the engine logs the account ID out and tries to log in again; if the logout succeeds, the logout callback is invoked. Afterwards, once the login succeeds or the maximum number of retries is exceeded, the login callback is invoked. See the API documentation for details on these functions.

Important

These functions are tagged as ASSERT_NO_ROLLBACK, as explained in Detecting unwanted rollbacks. For that reason, they raise assertions when used in situations with potential rollbacks.

18.3.2. Finding servers connected to clients

If a user ID and an iFun session have been linked through the login handling described above, you can locate the session as follows.

18.3.2.1. Searching by account ID

// User ID to look up
string id = "target_id";
Rpc::PeerId peer_id = AccountManager::Locate(id);

if (not peer_id.is_nil()) {
  LOG(INFO) << id << " is connected to " << peer_id;
}
// User ID to look up
string id = "target_id";
System.Guid peer_id = AccountManager.Locate (id);
if (peer_id != Guid.Empty)
{
  Log.Info("{0} is connected to {1}", id, peer_id.ToString ());
}

18.3.3. Sending packets to peer server clients

You can send packets to an account ID that has been linked to a session through AccountManager::CheckAndSetLoggedIn(). This works whether the account is connected to a peer or to the local server.

The following example assumes AccountManager::CheckAndSetLoggedIn(“target_account_id”) was executed.

Json msg;
msg["message"] = "hello!";
msg["from"] = "my_id";

AccountManager::SendMessage("chat", msg, "target_account_id");
JObject msg = new JObject ();
msg ["message"] = "hello!";
msg ["from"] = "my_id";

AccountManager.SendMessage ("chat", msg, "target_account_id");

Important

This is only available when the account has been linked to a session through AccountManager::CheckAndSetLoggedIn().

Important

AccountManager::SendMessage() is tagged as ASSERT_NO_ROLLBACK, as explained in Detecting unwanted rollbacks. For that reason, this function raises assertions when used in situations with potential rollbacks.

Tip

By using the features explained in (Advanced) Server communication using RPC, you can send packets directly to users playing on peers.

18.3.4. Sending packets to all clients

18.3.4.1. Sending packets to all server sessions regardless of login

Use the Session::BroadcastGlobally() function to send messages to all sessions connected to a particular server group regardless of login status. The only TransportProtocol types that can be used are kTcp and kUdp.

In the example below, packets are sent to the sessions connected to all servers. To send packets only to the sessions connected to servers with the tag game, replace the Rpc::GetPeers() call with Rpc::GetPeersWithTag(&peers, "game", true);.

void BroadcastToAllSessions() {
  Json msg;
  msg["message"] = "hello!";

  Rpc::PeerMap peers;
  Rpc::GetPeers(&peers, true);

  Session::BroadcastGlobally("world", msg, peers, kDefaultEncryption, kTcp);
}
public static void BroadcastToAllSessions()
{
  JObject msg = new JObject ();
  msg ["message"] = "hello";

  Dictionary<Guid, System.Net.IPEndPoint> peers;
  Rpc.GetPeers(out peers, true);

  Session.BroadcastGlobally ("world",
                             msg,
                             peers,
                             Session.Encryption.kDefault,
                             Session.Transport.kTcp);
}

Tip

To send messages to all sessions connected to local servers, see Session::BroadcastLocally() in Sending messages to all sessions.

Important

Session::BroadcastGlobally() and Session::BroadcastLocally() are tagged as ASSERT_NO_ROLLBACK, as explained in Detecting unwanted rollbacks. For that reason, these two functions raise assertions when used in situations with potential rollbacks.

18.3.4.2. Sending packets to all clients logged into servers

In iFun Engine, a user counts as logged in when AccountManager::CheckAndSetLoggedIn() has been invoked and AccountManager::SetLoggedOut() has not yet been called.

To send packets to all logged-in clients, use AccountManager::BroadcastLocally() and AccountManager::BroadcastGlobally(). The former sends packets only to clients connected to the current server, while the latter sends packets to all clients connected to multiple servers.

The only TransportProtocol types that can be used are kTcp and kUdp.

Example: Sending packets to all clients logged into a local server

void BroadcastToAllLocalClients() {
  Json msg;
  msg["message"] = "hello!";

  AccountManager::BroadcastLocally("world", msg, kDefaultEncryption, kTcp);
}
public void BroadcastToAllLocalClients()
{
  JObject msg = new JObject();
  msg["message"] = "hello";

  AccountManager.BroadcastLocally("world",
                                  msg,
                                  Session.Encryption.kDefault,
                                  Session.Transport.kTcp);
}

Example: Sending packets to all clients on all servers

void BroadcastToAllClients() {
  Json msg;
  msg["message"] = "hello!";

  Rpc::PeerMap peers;
  Rpc::GetPeers(&peers, true);

  AccountManager::BroadcastGlobally("world", msg, peers, kDefaultEncryption, kTcp);
}

Important

If you want to send packets only to servers with the tag “game” in the example above, replace the Rpc::GetPeers() call with Rpc::GetPeersWithTag(&peers, "game", true);.

public void BroadcastToAllClients()
{
  JObject msg = new JObject();
  msg["message"] = "hello";

  Dictionary<Guid, System.Net.IPEndPoint> peers;
  Rpc.GetPeers(out peers, true);

  AccountManager.BroadcastGlobally("world",
                                   msg,
                                   peers,
                                   Session.Encryption.kDefault,
                                   Session.Transport.kTcp);
}

Important

If you want to send packets only to servers with the tag “game” in the example above, replace the Rpc.GetPeers() call with Rpc.GetPeersWithTag(out peers, "game", true);.

Important

AccountManager::BroadcastGlobally() and AccountManager::BroadcastLocally() are tagged as ASSERT_NO_ROLLBACK, as explained in Detecting unwanted rollbacks. For that reason, these two functions raise assertions when used in situations with potential rollbacks.

18.3.5. Transferring clients to peers

18.3.5.1. MANIFEST.json settings

Set redirection_secret in MANIFEST.json’s AccountManager section. Enter a random 32-byte value expressed as a hexadecimal string (64 characters).

{
  "AccountManager": {
    // Secret value expressed in hex format
    "redirection_secret": "a29fd424579997bf91e3..."
  }
}

You can create these easily using the following command.

$ python -c "import os, binascii; print(binascii.hexlify(os.urandom(32)).decode())"

When a token for client transfer is created, this value is used as a random seed.

Important

redirection_secret must be identical on all servers. Keep this value secret; see Encrypting Data in MANIFEST.json if necessary.

18.3.5.2. Moving a client to another server

When a client needs to move from one server to another, such as moving to a game server after matchmaking, you can transfer it with the AccountManager::RedirectClient() function.

class AccountManager : private boost::noncopyable {
  ...
  static bool RedirectClient(
      const Ptr<Session> &session, const Rpc::PeerId &peer_id,
      const string &extra_data) ASSERT_NO_ROLLBACK;
  ...
};
class AccountManager {
  ...
  public static bool RedirectClient(
      Session session, System.Guid peer_id, string extra_data);
  ...
}

To pass session information to the server the client is moving to, use the extra_data field.

Note the following when using AccountManager::RedirectClient().

Warning

To transfer a client to another server, the client must already be logged in via AccountManager::CheckAndSetLoggedIn().

Warning

Because the extra_data value is relayed through the client, information that must not be exposed to the client should be sent directly between servers via RPC.

Warning

After AccountManager::RedirectClient() is called, the logout and login steps are handled inside the engine, so no additional login or logout work is required.

Important

The game server must be configured to use TCP (recommended) or UDP. HTTP is not supported: since it is a request-response protocol, the server cannot send a packet the client did not request.

Example: Moving a client to a specific server

Rpc::PeerId destination_server = ...  // Selected from the result of Rpc::GetPeers().

std::string extra_data = "";

if (not AccountManager::RedirectClient(session, destination_server, extra_data)) {
  return;
}
System.Guid destination_server = ... // Selected from the result of Rpc.GetPeers().

string extra_data = "";

if (!AccountManager.RedirectClient (session, destination_server, extra_data))
{
  return;
}

18.3.5.3. How the redirect message is processed

When RedirectClient is called, the server transfer proceeds as follows.

First, the user is logged out from the current server (the equivalent of AccountManager::SetLoggedOut()). If the logout succeeds, a message (_sc_redirect) containing the destination server’s information and a random authentication token is sent to the client, and the session is then closed. The logout and login steps in this process are handled inside the engine, so no additional login or logout work is required.

After receiving the redirect message, the client must disconnect from the current server, connect to the new server using the information contained in the redirect message, and authenticate with the new server using the random authentication token received from the old server.

Once the client has moved, the destination server verifies the client with the random authentication token. If verification succeeds, the server logs the client in again.

Note

The redirect message from the server is handled automatically by the client plugin provided by iFun Engine, so you do not need to handle it manually on the client.

For reference, the plugin performs the following steps.

  1. Disconnects from the current server

  2. Connects to the new server

  3. Authenticates with the new server using the random authentication token received from the old server

The client plugin supports callbacks invoked while step 2 is being processed. For example, you can specify the encryption type or apply additional settings according to the received flavor information.

For details, see Server redirect in the client plugin documentation.

18.3.5.4. Handling a redirected client on the new server

After connecting to the new server, the client goes through authentication with the random token sent by the previous server. iFun Engine performs this authentication itself, but the follow-up handling of the result must be done by the game server.

To receive the authentication result, register a callback function as follows.

bool MyProject::Start() {
  ...
  AccountManager::RegisterRedirectionHandler(OnClientRedirected);
  ...
}
public static bool Start ()
{
  ...
  AccountManager.RegisterRedirectionHandler (OnClientRedirected);
  ...
}

Now, when a redirected client connects, iFun Engine invokes the callback registered above. The extra_data that the original server passed to AccountManager::RedirectClient() is received from the client and handed to the callback as well.

void OnClientRedirected(const std::string &account_id,
                        const Ptr<Session> &session,
                        bool success,
                        const std::string &extra_data) {
  if (success) {
    // Authentication succeeded.
    ...
  } else {
    // Authentication failed.
    ...
  }
}
public static void OnClientRedirected (string account_id,
                                       Session session,
                                       bool success,
                                       string extra_data)
{
  if (success)
  {
    // Authentication succeeded.
    ...
  }
  else
  {
    // Authentication failed.
    ...
  }
}

18.4. Configuring and managing Zookeeper

iFun Engine uses Zookeeper to implement its distribution system.

This section outlines how to use Zookeeper. For more details, please visit the Zookeeper official website.

18.4.1. Installing Zookeeper

Install Zookeeper with the following commands.

Tip

Download and install the latest version from the Zookeeper official website.

Ubuntu:

$ sudo apt-get update
$ sudo apt-get install zookeeper zookeeperd
$ sudo service zookeeper start

CentOS 6:

$ sudo yum install zookeeper
$ sudo service zookeeper start

CentOS 7:

$ sudo yum install zookeeper
$ sudo systemctl enable zookeeper
$ sudo systemctl start zookeeper

18.4.2. Using the command line tool

Use zkCli.sh to view the Zookeeper data created by iFun Engine. Connect with the commands below, then enter the ? (question mark) command to see the commands that can be used.

$ cd /usr/share/zookeeper/bin/
$ ./zkCli.sh
...
[zk: localhost:2181(CONNECTED) 1]

18.4.3. Zookeeper directory made by iFun Engine

iFun Engine creates the following directories in Zookeeper. Do not modify or delete them, and do not create any other directories. You can inspect them with zkCli.sh as shown after the list.

  • /{{ProjectName}}/servers

  • /{{ProjectName}}/keys

  • /{{ProjectName}}/objects

  • /{{ProjectName}}/active_accounts
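
For example, assuming a project named MyGame, you can list the contents of the servers node from zkCli.sh:

[zk: localhost:2181(CONNECTED) 1] ls /MyGame/servers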

18.4.4. Zookeeper profiling

iFun Engine measures statistics on the time spent processing the Zookeeper operations used to share objects. (The corresponding profiling option must be enabled in the configuration.)

To view these statistics, invoke the following API.

GET http://{server ip}:{api-service-port}/v1/counters/funapi/distribution_profiling/
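
For example, assuming the server runs locally and its ApiService port is 8014 (check your MANIFEST.json), you could query the counters like this:

$ curl http://localhost:8014/v1/counters/funapi/distribution_profiling/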

Statistics show the time taken to process Zookeeper commands. Their types and meanings are as follows.

  • all_time: statistics over the entire uptime

  • last1min: statistics from the last minute

  • execution_count: number of times the command was processed

  • execution_time_mean_in_sec: average processing time (seconds)

  • execution_time_stdev_in_sec: standard deviation of the processing time (seconds)

  • execution_time_max_in_sec: maximum processing time (seconds)

Sample statistical results

{
    "zookeeper": {
        "nodes": "localhost:2181",
        "client_count": 10,
        "all_time": {
            "execution_count": 105213,
            "execution_time_mean_in_sec": 0.00748,
            "execution_time_stdev_in_sec": 0.026617,
            "execution_time_max_in_sec": 0.249311
        },
        "last1min": {
            "execution_count": 0,
            "execution_time_mean_in_sec": 0.0,
            "execution_time_stdev_in_sec": 0.0,
            "execution_time_max_in_sec": 0.0
        }
    }
}

18.4.5. Checking Zookeeper status

1) Getting statistics from the Zookeeper server:

$ echo stat | nc localhost 2181

Zookeeper version: 3.4.5--1, built on 06/10/2013 17:26 GMT
Clients:
 /0:0:0:0:0:0:0:1:38670[0](queued=0,recved=1,sent=0)
 /0:0:0:0:0:0:0:1:38457[1](queued=0,recved=9469,sent=9469)

Latency min/avg/max: 0/31/334
Received: 1177235
Sent: 1417245
Connections: 2
Outstanding: 0
Zxid: 0x80eb3a9
Mode: standalone
Node count: 10

2) Resetting Zookeeper statistics:

$ echo srst | nc localhost 2181

Server stats reset.

3) Checking Zookeeper status:

imok means “I’m OK” and is normal.

$ echo ruok | nc localhost 2181

imok

18.4.6. Zookeeper guidelines

Please read the following recommendations when building a Zookeeper cluster for a production service.

18.4.6.4. JVM configuration

The JVM heap must be set smaller than the system memory. Otherwise memory swapping may occur and severely degrade overall performance. JVM settings, including the heap size, are in /etc/default/zookeeper on Ubuntu and /etc/zookeeper/java.env on CentOS.

18.5. (Advanced) Server communication using RPC

iFun Engine supports RPC for communication between servers. First define the RPC messages to be used in Protobuf, then register handler functions to be invoked when those RPC messages are received.

18.5.1. Defining RPC messages

Servers communicate using Protobuf. When the project is created, a file named {{ProjectName}}_rpc_messages.proto is also created in the src directory. Define the RPC messages you write as extensions of FunRpcMessage.

Note

For an explanation of Google Protobuf extensions and syntax, see Google Protocol Buffers.

Important

When extending FunRpcMessage, you must use field numbers of 32 or higher; numbers 0 to 31 are reserved by iFun Engine.

The example below defines MyRpcMessage and EchoRpcMessage, which carry text strings.

message MyRpcMessage {
  optional string message = 1;
}

message EchoRpcMessage {
  optional MyRpcMessage request = 1;
  optional MyRpcMessage reply = 2;
}

extend FunRpcMessage {
  optional MyRpcMessage my_rpc = 32;
  optional EchoRpcMessage echo_rpc = 33;
}

18.5.2. Defining a message handler

Define a handler function to receive and handle messages. This handler can have two forms depending on whether it requires an explicit response or not.

18.5.2.1. Handlers for messages that don’t require an explicit response

If the handler doesn’t need to respond to RPC messages, you can create it as follows.

void OnMyRpc(const Rpc::PeerId &sender, const Rpc::Xid &xid,
             const Ptr<const FunRpcMessage> &request) {
  BOOST_ASSERT(request->HasExtension(my_rpc));
  const MyRpcMessage &msg = request->GetExtension(my_rpc);

  LOG(INFO) << msg.message() << " from " << sender;
}
public static void OnMyRpcHandler(Guid sender, Guid xid, FunRpcMessage request) {
  MyRpcMessage msg = null;

  if (!request.TryGetExtension_my_rpc (out msg))
  {
    return;
  }
  Log.Info ("{0} from {1}", msg.message, sender);
}

Note

If the handler does not explicitly respond, iFun Engine sends a dummy response internally.

18.5.2.2. Handlers for messages that require an explicit response

Handlers that must send a response receive an Rpc::ReadyBack finisher as the last parameter. The finisher must be called with the RPC response once the handler has finished its processing; otherwise, the calling server keeps waiting for the response.

Handler for “echo”:

void OnEchoRpc(const Rpc::PeerId &sender, const Rpc::Xid &xid,
               const Ptr<const FunRpcMessage> &request,
               const Rpc::ReadyBack &finisher) {
  BOOST_ASSERT(request->HasExtension(echo_rpc));
  const EchoRpcMessage &echo = request->GetExtension(echo_rpc);
  const MyRpcMessage &echo_req = echo.request();

  LOG(INFO) << echo_req.message() << " from " << sender;

  Ptr<FunRpcMessage> reply(new FunRpcMessage);
  reply->set_type("echoreply");
  EchoRpcMessage *echo2 = reply->MutableExtension(echo_rpc);
  MyRpcMessage *echo_reply = echo2->mutable_reply();
  echo_reply->set_message(echo_req.message());

  finisher(reply);
}

Handler for “echoreply”:

void OnEchoReplyRpc(const Rpc::PeerId &sender, const Rpc::Xid &xid,
                    const Ptr<const FunRpcMessage> &reply) {
  if (not reply) {
    LOG(ERROR) << "rpc call failed";
    return;
  }

  const EchoRpcMessage &echo = reply->GetExtension(echo_rpc);
  const MyRpcMessage &echo_reply = echo.reply();
  LOG(INFO) << "reply " << echo_reply.message() << " from " << sender;
}

Handler for “echo”:

public static void OnEchoRpcHandler(Guid sender, Guid xid, FunRpcMessage request, Rpc.ReadyBack finisher)
{
  Log.Info ("OnEchoRpcHandler");

  EchoRpcMessage echo_req = null;

  if (!request.TryGetExtension_echo_rpc (out echo_req))
  {
    return;
  }

  Log.Info ("{0} from {1}", echo_req.request.message, sender);

  FunRpcMessage reply = new FunRpcMessage();
  reply.type = "echoreply";
  EchoRpcMessage echo2 = new EchoRpcMessage();
  echo2.reply = new MyRpcMessage();
  echo2.reply.message = echo_req.request.message;
  reply.AppendExtension_echo_rpc(echo2);

  finisher (reply);
}

Handler for “echoreply”:

public static void OnEchoReplyRpc(Guid sender, Guid xid, FunRpcMessage reply)
{
  if (reply == null)
  {
    Log.Error ("rpc call failed");
    return;
  }

  EchoRpcMessage echo = null;

  if (!reply.TryGetExtension_echo_rpc(out echo))
  {
    return;
  }

  Log.Info ("{0} from {1}", echo.reply.message, sender);
}

Note

Because the transaction ID (XID) is used to match responses to RPC requests, the type string of a response, unlike that of a request, is not significant; it only needs to be non-empty. The XID used in requests and responses is set automatically by iFun Engine.

18.5.3. Registering a message handler

Finally, register each handler mapped to its RPC type. To do this, add code like the following to the server’s Install() function.

18.5.3.1. For handlers that don’t respond

Rpc::RegisterVoidReplyHandler("my", OnMyRpc);
Rpc.RegisterVoidReplyHandler ("my", OnMyRpcHandler);

18.5.3.2. For handlers that respond

Rpc::RegisterHandler("echo", OnEchoRpc);
Rpc.RegisterHandler ("echo", OnEchoRpcHandler);

18.5.4. Discovering the ID of the server to receive the message

You can use the methods explained in Exporting server lists or Linking and unlinking a client and an iFun session (login/logout) to find the ID of the server that will receive RPC messages.
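
For example, a sketch that locates the server on which an account is logged in and sends it the "my" RPC defined earlier (the account ID is illustrative):

// Find the server handling the account, then send it an RPC.
Rpc::PeerId peer_id = AccountManager::Locate("target_id");
if (peer_id.is_nil()) {
  LOG(INFO) << "target_id is not logged in anywhere";
  return;
}

Ptr<FunRpcMessage> request(new FunRpcMessage);
request->set_type("my");  // must match the registered handler type
MyRpcMessage *msg = request->MutableExtension(my_rpc);
msg->set_message("hello from another server!");
Rpc::Call(peer_id, request);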

18.5.5. Sending messages

Once you know the PeerId of the target server using the methods above, you can send messages as follows.

18.5.5.1. For messages with no response

Rpc::PeerMap peers;
Rpc::GetPeers(&peers);
Rpc::PeerId target = peers.begin()->first;

Ptr<FunRpcMessage> request(new FunRpcMessage);
// The type must match the type registered with RegisterVoidReplyHandler.
request->set_type("my");
MyRpcMessage *msg = request->MutableExtension(my_rpc);
msg->set_message("hello!");
Rpc::Call(target, request);
Dictionary<Guid, System.Net.IPEndPoint> peers;
Rpc.GetPeers (out peers);

Guid key = peers.First ().Key;

FunRpcMessage request = new FunRpcMessage ();
// The type must match the type registered with RegisterVoidReplyHandler.
request.type = "my";
MyRpcMessage msg = new MyRpcMessage ();
msg.message = "hello";

request.AppendExtension_my_rpc (msg);
Rpc.Call (key, request);

18.5.5.2. For messages with a response

Rpc::PeerMap peers;
Rpc::GetPeers(&peers);
Rpc::PeerId target = peers.begin()->first;

Ptr<FunRpcMessage> request(new FunRpcMessage);
request->set_type("echo");
EchoRpcMessage *echo = request->MutableExtension(echo_rpc);
MyRpcMessage *echo_request = echo->mutable_request();
echo_request->set_message("hello!");
Rpc::Call(target, request, OnEchoReplyRpc);
Dictionary<Guid, System.Net.IPEndPoint> peers;
Rpc.GetPeers (out peers);

Guid key = peers.First ().Key;

FunRpcMessage request = new FunRpcMessage ();
request.type = "echo";
EchoRpcMessage echo = new EchoRpcMessage ();
echo.request = new MyRpcMessage ();
echo.request.message = "hello";

request.AppendExtension_echo_rpc (echo);
Rpc.Call (key, request, OnEchoReplyRpc);

Important

Rpc::Call() is tagged as ASSERT_NO_ROLLBACK, as explained in Detecting unwanted rollbacks. For that reason, this function raises assertions when used in situations with potential rollbacks.

18.6. Distribution parameters

18.6.1. AccountManager

Handles features including player login, logout, and movement between servers. These features also work in distributed environments with several servers.

  • redirection_strict_check_server_id: Checks server ID when client moves to connect to a different server. (type=bool, default=true)

  • redirection_prefer_hostname: Prefers the DNS hostname over the IP address when a client is redirected to a new server. (type=bool, default=true)

  • redirection_secret: Secret key to authenticate when client moves to connect to a different server (type=string)

18.6.2. RpcService

Controls communication between servers.

  • rpc_enabled: Enables RPC functions and other RPC-dependent functions. (type=bool, default=false)

  • rpc_threads_size: Number of threads handling RPC processing (type=uint64, default=4)

  • rpc_port: TCP port number to use for RPC server. (type=uint64, default=8015)

  • rpc_nic_name: Network interface (NIC) used in RPC communication. For security reasons and to reduce external cloud network usage, it is better to choose a network card that connects to an internal network. (type=string, default=””)

  • rpc_use_public_address: Forces use of a public IP rather than the NIC address for RPC. This is used in situations like a cloud environment where the NIC IP is a private IP and the public IP is different. (type=bool, default=false)

  • rpc_tags: Tags set on this server. In code, you can retrieve the list of servers that have particular tags.

    E.g.) With the setting below, you can select only the servers tagged dungeon_server or only those tagged level:1-5.

    "rpc_tags": [ "dungeon_server", "level:1-5" ]
    
  • rpc_message_logging_level: Log level for RPC messages. If 0, no logs are kept. If 1, transaction ID, partner server ID, and message type and length are logged. If 2, the preceding information and the message body are logged. (type=uint64, default=0)

Parameters that are almost never changed manually

  • rpc_backend_zookeeper: Uses Zookeeper for RPC communication. (type=bool, default=true)

  • rpc_disable_tcp_nagle: Disables the Nagle algorithm (the TCP_NODELAY socket option) on RPC TCP sessions. (type=bool, default=true)

  • enable_rpc_reply_checker: When set to true, outputs warning messages if there is no RPC response within 5 seconds. (type=bool, default=true)
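
For example, a MANIFEST.json fragment that enables RPC over an internal NIC for a lobby server might look like the following (the NIC name and tag are illustrative):

{
  ...
  "RpcService": {
    "rpc_enabled": true,
    "rpc_port": 8015,
    "rpc_nic_name": "eth1",
    "rpc_tags": [ "lobby" ]
  }
  ...
}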

18.6.3. ZookeeperClient

Controls connection with Zookeeper when iFun Engine uses Zookeeper for connection between servers.

  • zookeeper_nodes: Zookeeper server lists. Listed in the form of “IP:port” and comma-separated. (type=string, default=”localhost:2181”)

  • zookeeper_client_count: Number of simultaneous connections made to Zookeeper. (type=uint64, default=4)

  • zookeeper_session_timeout_in_second: Zookeeper session timeout time. (type=uint64, default=60)

  • zookeeper_log_level: Zookeeper library log level. (type=uint64, default=1)
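
For example, a MANIFEST.json fragment pointing all servers at the same three-node ensemble might look like the following (the addresses are placeholders):

{
  ...
  "ZookeeperClient": {
    "zookeeper_nodes": "10.0.0.11:2181,10.0.0.12:2181,10.0.0.13:2181",
    "zookeeper_client_count": 4,
    "zookeeper_session_timeout_in_second": 60
  }
  ...
}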

Important

Servers communicating through distribution must have the same Zookeeper settings, including Zookeeper node address. They need to connect to the same Zookeeper server.

Important

When running 2 or more servers on a single device, ports in SessionService, RpcService, and ApiService must not overlap.