MeatShield

Pathfinder
  • Content Count

    25
Community Reputation

2 Neutral


  1. Game.ini: GameUserSettings.ini: https://server.nitrado.net/usa/news2/view/atlas-major-update-5-1-bring-new-game-modes-new-settings-items-and-more/
  2. The Linux dedicated server files don't work. I'm running my dedicated server in a Windows VM using http://www.phoenix125.com/AtlasServerUpdateUtil.html. The author is active on the forum and is responsive to bugs and other issues.
  3. @Phoenix125 Take a look at ProcessWaitClose(). It blocks until the process exits, and it accepts a timeout parameter. You should be able to get rid of the spin wait and the timer. (Sorry in advance, I don't have an AutoIt IDE and have never touched it before.) Again, no rush.
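For anyone following along who doesn't read AutoIt, the same pattern sketched in Python (an analogy, not the utility's actual code): subprocess's wait() behaves like ProcessWaitClose(), blocking until the child exits, with an optional timeout bounding the wait so no spin loop or timer is needed.

```python
import subprocess
import sys

# Spawn a short-lived child as a stand-in for the real process.
proc = subprocess.Popen([sys.executable, "-c", "pass"])
try:
    # Blocks until the child exits, like ProcessWaitClose();
    # the timeout bounds the wait instead of a polling loop.
    rc = proc.wait(timeout=30)
except subprocess.TimeoutExpired:
    proc.kill()       # give up and force-terminate, roughly ProcessClose()
    rc = proc.wait()
```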
  4. The keep alive program operates like some old-school malware. You'd have 2 programs, each checking to see if the other was running and restarting it if it wasn't. This made it difficult for normal users to kill the process. That's basically what the keep alive program does. I added an exception for it.
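A minimal one-direction version of that watchdog pattern in Python (illustrative only; the real keep-alive pairs two of these, each watching the other):

```python
import subprocess
import sys
import time

def watchdog(cmd, poll_interval=1.0, max_restarts=3):
    """Poll a child process and restart it whenever it has exited."""
    proc = subprocess.Popen(cmd)
    restarts = 0
    while restarts < max_restarts:
        time.sleep(poll_interval)
        if proc.poll() is not None:   # child exited -> bring it back
            proc = subprocess.Popen(cmd)
            restarts += 1
    proc.wait()                       # tidy up the last child
    return restarts
```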
  5. @Phoenix125 Awesome! Can we add one more bug? When you change the setting PlayerDefaultNoDiscoveriesMaxLevelUps, it gets written to the Game.ini file with a space before the equals sign, e.g. PlayerDefaultNoDiscoveriesMaxLevelUps =35
  6. @Phoenix125 Yup, I have 2 physical servers, each running half of the grids. The grids on the other server are marked as remote, but I am polling them. I will disable Poll Remote Servers on one of my 2 and see if that makes an impact... and 2 minutes later the one I disabled it on stopped growing. I swapped, and the growing swapped. So I think we know where the problem lies. I turned them both off and I'll let them run for a few hours to see. I had a thought about the hanging: I really don't know much about AutoIt, but maybe there is a limit on how many handles it can have open, and since they were not being closed, it eventually ran out and doesn't handle it gracefully. I think everyone with the hanging issue also has the memory leak issue.
  7. @Phoenix125 Thanks for the response. Based on the log file, it looks like you may not be freeing some memory when the rcon command fails. I'm running a 6x6, but only the center 4x4 are normally turned on. The outer ring are special grids with the power stones and whatnot; the plan is to turn one or 2 on when we're ready. So every time it tries to check the player count, 20 servers are off. Long story short, the log file consists of the following every 3 minutes: I read over your source and I think the issue is lines 7725 through 7734. It looks like you're opening a handle to STDOUT and STDERR, but only reading STDOUT. StdoutRead() frees the resources for STDOUT, but the resources are never freed for STDERR. I'm guessing that STDERR is being populated when mcrcon.exe is sent SIGKILL from ProcessClose(). However, even if there weren't data in STDERR, just leaving the handle open would cause a slow memory leak. I think you want StdioClose() https://www.autoitscript.com/autoit3/docs/functions/StdioClose.htm I hope this helps.
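The same failure mode sketched in Python rather than AutoIt (an analogy, not the utility's code): open pipes for both streams but drain only one, and the undrained pipe's handle and buffer are never released. communicate() reads and closes both, which is the moral equivalent of calling StdioClose() when you're done with the child.

```python
import subprocess
import sys

child = [sys.executable, "-c",
         "import sys; print('ok'); print('rcon failed', file=sys.stderr)"]

# Leaky pattern: both pipes opened, only stdout ever read.
proc = subprocess.Popen(child, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE, text=True)
out = proc.stdout.read()   # stderr's handle and buffered data linger...
proc.stderr.close()        # ...unless you explicitly close it
proc.wait()

# Safer pattern: communicate() drains AND closes both pipes.
proc = subprocess.Popen(child, stdout=subprocess.PIPE,
                        stderr=subprocess.PIPE, text=True)
out, err = proc.communicate()
```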
  8. So that's on the high end, which is good. It's going to come down to population for you. For us, we've got more grids than people, so the majority sit idle. If you're in the same boat, you should be able to run 8-ish. However, if you're going to have a good sized population, you probably want at least 1 core per grid, so 4.
  9. @Angerrising I run a low pop server, so your mileage may vary... Without memory compression I'm averaging about 1.9 GB per grid, 2.7 GB high, 1.4 GB low. With compression they average about 675 MB per grid, 800 MB high, 500 MB low. The compression takes about a day to settle, and it doesn't seem to impact performance in a noticeable way. I'd bet your bottleneck would be CPU. Which i7 do you have? There's a huge difference between an i7-4770TE and an i7-4790K. I'm running 8 grids on an i5-8259U, and that's comparable to a high-end Devil's Canyon i7. It runs without issues, but again, it's just a handful of people spread across all of the grids.
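Back-of-the-envelope totals from those averages, for an 8-grid setup like mine (just arithmetic on the numbers above):

```python
grids = 8
avg_plain_gb = 1.9         # average GB per grid without memory compression
avg_compressed_gb = 0.675  # average GB per grid with compression

total_plain = grids * avg_plain_gb            # 15.2 GB total
total_compressed = grids * avg_compressed_gb  # 5.4 GB total
savings = 1 - avg_compressed_gb / avg_plain_gb  # roughly 64% less memory
```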
  10. Hey @Phoenix125 I really like the utility; I'm running it on 2 servers for my cluster. A couple of questions/comments.

      I've also had the issue with the 2.1.2 64-bit client hanging. I'm able to kill it and restart it without losing my servers, so no big deal. Next time it happens, what do you need me to collect to help you track it down? If it were Linux, I'd attach strace to it... anything I can do under Windows to help figure out what it's stuck on?

      Do you have plans to multi-thread the rcon calls? Maybe let us set a maximum number of workers too. It takes a while to churn through all of the grids, and 99% of the time is spent waiting.

      Any chance you can modify the zero pop low CPU priority code to look at adjacent grids too? So a grid would go to low priority only if it, and its adjacent grids, have zero population (or are turned off). It would also be nice to be able to whitelist grids to always run at normal priority. It can be a little painful sailing into a new grid that is running at low priority when the population check only happens every 30 seconds. Also, being able to always run freeports as normal, and any grid that has permanent player bases too.

      When I start the utility, it takes up about 50 MB of memory. It's increasing at a rate of about 1 MB every few minutes; after a day or so it's around 3 GB. I'm running 2.1.2 x64. I'm hesitant to call it a memory leak, because so many people throw that word around without a firm grasp of what it is... but it really looks like a memory leak. Please let me know if I can help track it down. I don't mind restarting the program once a day or so. Thanks,
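The multi-threaded rcon idea could look roughly like this in Python (a sketch under assumptions: query_player_count is a hypothetical stand-in for the utility's per-grid mcrcon call, not a real function of it):

```python
from concurrent.futures import ThreadPoolExecutor

def query_player_count(grid):
    # Hypothetical stand-in: a real version would invoke mcrcon
    # against this grid's rcon port and parse the player list.
    return (grid, 0)

def poll_all(grids, max_workers=8):
    # Bounded worker pool: up to max_workers rcon calls in flight,
    # so the mostly-waiting calls overlap instead of running serially.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(query_player_count, grids))
```

Since nearly all of the per-grid time is spent waiting on the network, even a small worker pool should shrink the full sweep dramatically.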
  11. The windows dedicated server works fine. It's annoying, but I setup a windows 10 VM and I'm running my servers out of it. Without memory compression I'm averaging about 1.9 GB per grid, 2.7 GB high, 1.4 GB low. With compression they average about 675 MB per, 800 MB high, 500 MB low. Once the linux dedicated server works again it should just be copying some files over. I wouldn't wait for the linux client if you want to host your own.
  12. Did you set spawnPointRegionOverride?
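For anyone else hitting this: that key lives in ServerGrid.json. A minimal fragment from memory, so treat the exact placement as an assumption and check it against your own file (the grid name "A1" is just an example):

```json
{
  "spawnPointRegionOverride": "A1"
}
```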
  13. Do you have NAT Reflection/Loopback/Hairpinning set up too? Can you use a Steam link to the local IP and join that way (won't work for server transitions though)? steam://connect/{IP}:{PORT}
  14. MeatShield

    Upcoming Patch and Happy Lunar New Year.

    Will we have working Linux Dedicated servers again?
  15. I've seen something similar with my servers, but not to the extent you're seeing. If I log in near an edge, sometimes I see that the adjacent servers are red, but that disappears after a few seconds. If I'm starting the entire cluster and join, sometimes the red sticks around a little longer, but I have never had to restart the cluster, or even individual servers. I think the issue is that your hardware is underpowered. There are 3 things to keep in mind with your CPUs: overall performance, per-thread performance, and cache. All three aren't that great compared to modern CPUs, even though you have 2 chips. There are a lot of operations that don't parallelize well, especially in gaming. It would be great if the game server(s) could be spread evenly across all of your cores, but realistically, don't count on it. When a server boots, it pegs a core at 100%; there's some stuff on the side, but the bulk of it doesn't parallelize. https://www.cpubenchmark.net/cpu.php?cpu=Intel+Xeon+E5450+%40+3.00GHz&id=1236&cpuCount=2 Your chips were released in 2009. For overall performance you're looking at 4179 per chip, or about 7499 overall. Their single-thread performance was great for the time but is lacking today: they score a 1266, and most modern mid-range chips are double that. For comparison, my old 6600K has an overall score of 8061 and a per-thread score of 2147. I'm currently running a Ryzen 3700X: overall 23840, and 2907 per thread. Cache is also important. https://en.wikipedia.org/wiki/CPU_cache If you don't know, CPUs have cache, which is like a faster form of memory. CPUs have different tiers of cache: Level 1, Level 2, and often Level 3 and sometimes Level 4 these days. L1 is the fastest and smallest; L2 is larger and slower, but still faster than main memory; L3 is slower than L2 but still faster than main memory. They all mirror chunks of main memory, so everything in L3 is in main memory, everything in L2 is in L3, etc.
Long story short: the CPU can only operate on stuff in cache. If data isn't in L1 it looks in L2; if it's not there, L3; and so on until it finally has to get it from main memory. This takes time, so the more cache the better. We can ignore the levels and just think of it as a single cache pool, because the difference between in-cache and not-in-cache is what really matters; a trip to main memory is orders of magnitude more of a performance hit than the difference between L1 and L2. Each of your CPUs has 12 MB of cache. If you're running a lot of active processes, you may run into a situation where the working set doesn't fit into cache, and the CPU has to keep swapping out the contents of cache for each of the processes. This takes time and can result in cache thrashing. https://en.wikipedia.org/wiki/Thrashing_(computer_science) Knowing all of that, here is what I think is going on. When you log into a server, it checks the adjacent servers, and those 4 servers start processing. So you've got 5 servers trying to run at the same time, fighting for cache. Instead of getting meaningful work done, it's thrashing: it spends more time swapping memory in and out of cache than it does processing the actual work that needs to be done. I did some experimentation on my server: when I joined a grid after not being on for 8 hours, I did see several of the other servers spike in CPU usage for about a second. Also, when I was setting everything up a few weeks ago, I ran into a thrashing issue (disk, not CPU) when I tried to start too many of the servers at the same time; everything ground to a halt. Next time this happens, open up Resource Monitor and take a look. Are the server instances (ShooterGameServer.exe) pegged at 100%? Check out the disk too; it could also be an issue with all the servers trying to hit the storage at the same time and thrashing on that.
If you really want to dig in, read this over: https://software.intel.com/en-us/articles/intel-performance-counter-monitor/ If you want to experiment, have only 2 servers running, let it sit for a while, then log into 1 and see if you still have the issue with the edge that connects to the second server... then do it with 3, 4, 5, and then 6.