Tactical Advance
Lumpy

Server question.


I think the answer might be obvious but I have to ask. When we go live, are we all going to be playing on a UK server if we want to play with org mates?  In my online gaming experience there is no connection between servers. They are separate worlds. I wonder if technology has changed enough that distance from server is less of an issue?  Please educate me guys:-)  I hope I don't have to change to an org that is in North America!  If I'm missing something, give me a kick.


That's good news. I'm not sure how it's all going to work with no lag between players halfway around the world. Technology has passed me by :x  Thanks Tac.

EDIT: I did some searching and found the info on how it will work... clear as mud, ha-ha.

Edited by Lumpy


Even as someone who knows network stuff pretty well, it is still clear as mud.

We know that you'll connect to the AWS servers closest to you. Those servers handle various aspects of Star Citizen connectivity (at a minimum, logging you in and passing you data from upstream servers), but as far as I know they are NOT the central brains that track mission availability, item state, persistence, etc. All of that stuff is, I believe, intended to be on a central set of servers that the other servers connect to (but those servers are also in AWS, so they are accessed very quickly even if far away). But I could be wrong, and it all ultimately comes down to just how well they are able to implement the long-term plans.

We know that they'd like to be able to have "thousands" of players in a single instance, but who knows if that'll turn out to be possible or not. And I haven't been able to figure out if the process that manages a single instance will be on a central server (in which case you'll more or less have the same game experience if playing with a mix of UK and US players) or at a geographically located server (in which case you'll be lagging a good bit if you play a UK instance from the US).
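As a rough illustration of "connect to the AWS servers closest to you", endpoint selection by lowest ping might look something like the sketch below. Every name and address here is made up for illustration; this is not real SC or AWS infrastructure, just one plausible shape of the logic.

```python
import socket
import time

# Hypothetical regional endpoints (TEST-NET addresses, purely illustrative).
ENDPOINTS = {
    "us-east": ("198.51.100.10", 443),
    "eu-west": ("203.0.113.20", 443),
    "ap-southeast": ("192.0.2.30", 443),
}

def measure_rtt_ms(addr, timeout=2.0):
    """Estimate round-trip time via TCP connect time (a crude ping substitute)."""
    start = time.monotonic()
    try:
        with socket.create_connection(addr, timeout=timeout):
            pass
    except OSError:
        return float("inf")  # unreachable endpoints sort last
    return (time.monotonic() - start) * 1000.0

def pick_lowest(rtts):
    """Given {region: rtt_ms}, return the region with the lowest RTT."""
    return min(rtts, key=rtts.get)

def pick_endpoint(endpoints=ENDPOINTS):
    """Measure each endpoint and pick the closest one by ping."""
    return pick_lowest({r: measure_rtt_ms(a) for r, a in endpoints.items()})
```

A launcher would call `pick_endpoint()` once at login (or let the player override it manually, as the diagrams apparently allow).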


You may be right that you connect to a central auth server first and then it forwards you to what it thinks is the best AWS endpoint, but I haven't seen anything indicating that yet. From the SC network diagrams I've (quickly) glanced at, you first connect to the closest AWS endpoint based on nothing more than best ping (or manual selection), and it connects to whatever (auth server, instance, etc.) for you from there. It basically acts as your relay to the SC network. Instead of typing this I should have just re-watched the video with the diagrams to find out for sure... but I'm feeling too lazy at the moment ;-)

As far as the specific instance you get assigned to, I think you're *probably* right that it will be primarily geographically based, as that guarantees the lowest latency, but I think it may be more complicated. If you're flying in a part of space where an instance has already been created, I think you may be placed in that instance even if it is running on a server that is not geographically near you.

Heck, I'm not even totally sure they won't just run all the instances in the US if the latency between AWS endpoints is low enough (theoretically the ping between Europe and the US could be as low as 50 ms if they have dedicated connections). If you have a ping of 50 ms to your AWS endpoint and then add 40 ms to that, it is still playable at 90 ms. I'd guess they won't do that because of the howls of protest from those outside the US (not to mention people playing from Australia, where the extra latency would be more like 80+ ms), but who knows.

It could be that even if they initially put you in a geographically local instance, you'll end up in a non-local instance as soon as anything interesting happens. That is to say, if there's suddenly a thousand-person event, the instance will probably be created where most of the players are, right? Just makes sense. And one might think that'd tend to be the US. But again, it is all speculation. Who knows. Theoretically, at least, an instance based in one geographical location could be dynamically moved to a server in another region on the fly, complicating things even further (the magic of cloud computing).
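The back-of-the-envelope latency math above, as a trivial sketch (the 50 ms and 40 ms figures are just the post's own guesses, not measured values):

```python
def effective_latency_ms(client_to_edge_ms, edge_to_instance_ms):
    """Total latency a player sees when their local edge server
    relays traffic to an instance hosted in another region."""
    return client_to_edge_ms + edge_to_instance_ms

# 50 ms to your AWS endpoint, plus a 40 ms hop to a remote instance:
effective_latency_ms(50, 40)  # -> 90 ms, arguably still playable
```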

In short, they're planning some highly unconventional, cutting-edge stuff here, so I think conventional wisdom about how all this normally works may not apply.

Or alternatively (and this would kinda suck, in my opinion) they ultimately decide to stay conventional and keep everything geographically separated to guarantee the best ping and fairness, so US players all stay in US instances and never even see UK players, and vice versa (unless a US player specifically decides to take the lag hit and logs in to a UK AWS server). That's the way games usually do it, but I'd hate for SC to end up being like that. And to be honest, if it were like that, I'd see little point in being in a UK-based org even if there were lots of US members. What would be the point? I'd maybe be strategizing with the whole org and sharing intel, but I'd never actually be playing online with anyone other than people from the US. So I definitely hope (and expect) that the game instances will work in a way that US and UK peeps can run into each other and play together.

Edited by Drakin


Drakin - my PoV on the topic has stayed the same for a year or so... the most logical (heh, again) way to give everybody a good experience would be to create multiple geo-based instances of the same "popular" place, and ensure there is a certain visibility between the events happening in each of those "sub-places".


Quoting: "Even as someone who knows network stuff pretty well, it is still clear as mud. We know that you'll connect to the AWS servers closest to you..."

I believe their goal is that if you are solo you will always be placed in the instance that is best suited to you (lowest latency and packet loss). However, if you are in a group you will be placed in the best possible servers (cluster/EC2 instances/whatever they use) for all players in that group; they will try to bring the latency as close as possible for all your group members. The other players you see in that area will be in the same boat: it's the best server for them too. A real issue arises when you are chasing someone and you have to transfer instances. You can designate someone as a focus (this has been confirmed) so they can't instance-transfer away from you and you can chase them into their new instance, but in that case the problem is that the new transfer will likely be based on the person you're chasing, not on your connection.
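One plausible way to implement "the best possible servers for all players in the group" is a minimax policy: pick the server where the worst-off member has the lowest latency. This is only a guess at the policy, with made-up server names and numbers:

```python
def best_server_for_group(latencies):
    """latencies: {server: {player: rtt_ms}}.
    Pick the server whose worst-off group member has the lowest RTT --
    a minimax policy, one guess at what "best for the group" means."""
    return min(latencies, key=lambda server: max(latencies[server].values()))

# Illustrative numbers for a mixed US/UK group:
latencies = {
    "us-east":    {"alice": 30, "bob": 110},
    "eu-west":    {"alice": 95, "bob": 25},
    "us-central": {"alice": 55, "bob": 80},
}
best_server_for_group(latencies)  # "us-central": the worst member sees only 80 ms
```

Note how neither player's home region wins: the compromise server beats both, which matches the idea of bringing latency "as close as possible" for everyone in the group.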


Technically, I believe the radar/scanner window could be based on the whole instance... and only when someone starts trying to shoot another mark, or hail another mark, etc., would both of them appear within the same instance :)


I started that discussion there :-)


I'm not sure the long range scanners will have anything to do with instancing at all. If I were designing it, I wouldn't tie it to instancing, anyway.

Instances are for handling real-time interactions, but it can be argued that a long-range scan doesn't really have to be real-time at all. It can just pick up objects/events in a certain range within the last few minutes. Thus, you could base the long-range scanner on the data in a server that just tracks the "last reported position" of objects/events (i.e. an item tracker).

Short range radar / scanning needs to be far more responsive, so I do agree that needs to be handled at the instance level.
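The "last reported position" idea could be sketched as a tiny tracker service. Everything here (the class name, the fields, the 5-minute staleness window) is invented for illustration; nothing is from an actual SC design doc.

```python
import time

class PositionTracker:
    """Store of last-reported positions that a long-range scanner could
    query without touching any live instance."""

    def __init__(self, max_age_s=300):
        self.max_age_s = max_age_s   # ignore reports older than this
        self.last_seen = {}          # object_id -> ((x, y, z), timestamp)

    def report(self, object_id, pos, now=None):
        """Record the latest known position of an object."""
        now = time.time() if now is None else now
        self.last_seen[object_id] = (pos, now)

    def scan(self, center, radius, now=None):
        """Return ids of objects last reported within `radius` of `center`
        in the last few minutes -- non-real-time by design."""
        now = time.time() if now is None else now
        hits = []
        for object_id, (pos, ts) in self.last_seen.items():
            if now - ts > self.max_age_s:
                continue  # stale report; the long-range scan no longer shows it
            dist = sum((a - b) ** 2 for a, b in zip(pos, center)) ** 0.5
            if dist <= radius:
                hits.append(object_id)
        return hits
```

Because the scan only reads timestamped snapshots, it can live on a central server without caring which instance anyone is in, which is exactly why it needn't be tied to instancing.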

Quoting: "I believe their goal is that if you are solo you will always be placed in the instance that is best suited to you..."

That's my understanding so far as well. The question then arises as to how they'll handle the additional lag that the people who are placed in a non-local instance will be experiencing. Not only will they be at a latency disadvantage, but since it will all be handled transparently they won't even *know* they're at a disadvantage, other than discovering that they're not landing hits the way they'd normally expect.


Quoting: "Instances are for handling real-time interactions, but it can be argued that a long range scan doesn't have to really be real-time at all..."

That is exactly what I'm saying... and only the area within a 20 km radius or so is subject to instancing :) ... and even in this area, in case there are 1000+ ships, there's no need to show all of them to each client :)
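The "no need to show all of them to each client" point is classic interest management: even inside one instance, each client only needs the handful of nearest ships replicated to it. A minimal sketch (the function name and the cap of 50 are arbitrary):

```python
def visible_ships(player_pos, ships, limit=50):
    """ships: {ship_id: (x, y, z)}. Return the ids of the `limit` ships
    closest to the player -- the only ones worth replicating to that client."""
    def dist2(pos):
        # Squared distance is enough for ranking; no sqrt needed.
        return sum((a - b) ** 2 for a, b in zip(pos, player_pos))
    return sorted(ships, key=lambda sid: dist2(ships[sid]))[:limit]
```

The server would run this per client per tick (in practice with a spatial index rather than a full sort), so a 1000-ship battle costs each client only its local slice of the action.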
