About MeatShield

  1. If you want to run a 3x3, you need 9 dedicated servers, one per grid. The Redis database is how the dedicated servers talk to each other. I had 8 servers running on an i5-8259U with 32 GB of memory; I think there was enough headroom for 9.
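A rough sanity check on that headroom claim, using the per-grid memory figures quoted later in this thread (those numbers come from a different grid count, so treat this as an estimate, not a measurement of a 3x3):

```python
# Per-grid memory figures quoted elsewhere in the thread (no memory
# compression): ~1.9 GB average, ~2.7 GB worst case per grid.
grids, ram_gb = 9, 32
avg_gb, worst_gb = 1.9, 2.7

print(round(grids * avg_gb, 1))    # ~17.1 GB typical
print(round(grids * worst_gb, 1))  # ~24.3 GB worst case, under 32 GB
```

Even in the worst case the 9 grids fit, which matches the "enough headroom" guess above.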
  2. For Game.ini and GameUserSettings.ini settings, see: https://server.nitrado.net/usa/news2/view/atlas-major-update-5-1-bring-new-game-modes-new-settings-items-and-more/
  3. The Linux dedicated server files don't work. I'm running my dedicated server in a Windows VM using http://www.phoenix125.com/AtlasServerUpdateUtil.html. The author is active on the forum and is responsive to bugs and other issues.
  4. @Phoenix125 Take a look at ProcessWaitClose(). It blocks until the process exits and accepts a timeout parameter, so you should be able to get rid of both the spin wait and the timer. (Sorry in advance, I don't have an AutoIt IDE and have never touched the language before.) Again, no rush.
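The suggestion above, sketched as a Python analogue (not AutoIt, but the same shape): block on the child with a timeout instead of polling in a loop.

```python
import subprocess
import sys

# Instead of a spin wait that polls "is the process still alive?",
# block on the child with a timeout, like AutoIt's
# ProcessWaitClose(pid, timeout). The child command here is a
# stand-in for the real server process.
proc = subprocess.Popen([sys.executable, "-c", "print('done')"])
try:
    proc.wait(timeout=30)   # blocks until exit, or raises after 30 s
except subprocess.TimeoutExpired:
    proc.kill()             # fallback, mirrors ProcessClose()
    proc.wait()
print(proc.returncode)      # 0
```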
  5. The keep-alive program operates like some old-school malware: you'd have 2 programs, each checking whether the other was running and restarting it if it wasn't, which made it difficult for normal users to kill the process. That's basically what the keep-alive program does. I added an exception for it.
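One half of that keep-alive pair, sketched in Python. The real utility pairs two of these, each watching the other; the `max_restarts` parameter exists only so the sketch terminates.

```python
import subprocess
import sys

# Watchdog: restart the target command whenever it exits.
# In the two-program scheme above, a second copy of this would be
# watching this process in exactly the same way.
def watch(cmd, max_restarts=None):
    restarts = 0
    proc = subprocess.Popen(cmd)
    while True:
        proc.wait()                   # block until the target dies
        if max_restarts is not None and restarts >= max_restarts:
            return restarts           # sketch-only escape hatch
        proc = subprocess.Popen(cmd)  # bring it back up
        restarts += 1

print(watch([sys.executable, "-c", "pass"], max_restarts=2))  # 2
```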
  6. @Phoenix125 Awesome! Can we add one more bug? When you change the PlayerDefaultNoDiscoveriesMaxLevelUps setting, it gets written to the Game.ini file with a space before the equals sign, e.g. PlayerDefaultNoDiscoveriesMaxLevelUps =35
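Until that's fixed, a workaround is to normalize the written lines yourself. `fix_ini_spacing` below is a hypothetical helper, not part of the utility:

```python
import re

# Normalize "Key =Value" lines (the bug described above) back to the
# "Key=Value" form the game expects. Operates per line, leaving
# already-correct lines untouched.
def fix_ini_spacing(text: str) -> str:
    return re.sub(r"^(\w+)\s+=", r"\1=", text, flags=re.M)

print(fix_ini_spacing("PlayerDefaultNoDiscoveriesMaxLevelUps =35"))
# PlayerDefaultNoDiscoveriesMaxLevelUps=35
```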
  7. @Phoenix125 Yup, I have 2 physical servers, each running half of the grids. The grids on the other server are marked as remote, but I am polling them. I disabled Poll Remote Servers on one of my 2 to see if that made an impact... and 2 minutes later, memory on the one I disabled it on stopped growing. I swapped the setting and the growth swapped with it, so I think we know where the problem lies. I turned it off on both and I'll let them run for a few hours to see. I also had a thought about the hanging: I really don't know much about AutoIt, but maybe there is a limit on how many handles it can have open, and since they were not being closed, it eventually runs out and doesn't handle that gracefully. I think everyone with the hanging issue also has the memory leak issue.
  8. @Phoenix125 Thanks for the response. Based on the log file, it looks like you may not be freeing some memory when the rcon command fails. I'm running a 6x6, but normally only the center 4x4 is turned on; the outer ring consists of special grids with the power stones and whatnot, and the plan is to turn one or 2 on when we're ready. So every time the utility checks the player count, 20 servers are off. Long story short, the log file consists of the following every 3 minutes: I read over your source and I think the issue is lines 7725 through 7734. It looks like you're opening handles to both STDOUT and STDERR, but only reading STDOUT. StdoutRead() frees the resources for STDOUT, but the resources are never freed for STDERR. I'm guessing that STDERR is being populated when mcrcon.exe is sent SIGKILL by ProcessClose(); however, even if there weren't data in STDERR, just leaving the handle open would cause a slow memory leak. I think you want StdioClose(): https://www.autoitscript.com/autoit3/docs/functions/StdioClose.htm I hope this helps.
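A Python analogue of the leak described above: open pipes for both stdout and stderr but only ever drain stdout, and the stderr handle stays open after each child exits. `communicate()` reads and closes both, which is what `StdioClose()` accomplishes on the AutoIt side.

```python
import subprocess
import sys

# Child that writes to stderr, like mcrcon.exe complaining when the
# grid it's polling is offline.
proc = subprocess.Popen(
    [sys.executable, "-c", "import sys; sys.stderr.write('rcon failed')"],
    stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True)

# communicate() drains AND releases both pipe handles; reading only
# stdout would leave the stderr pipe open, leaking a handle per call.
out, err = proc.communicate()
print(err)  # rcon failed
```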
  9. So that's on the high end, which is good. It's going to come down to population for you. We've got more grids than people, so the majority sit idle; if you're in the same boat, you should be able to run 8-ish. However, if you're going to have a good-sized population, you probably want at least 1 core per grid, so 4.
  10. @Angerrising I run a low-pop server, so your mileage may vary... Without memory compression I'm averaging about 1.9 GB per grid (2.7 GB high, 1.4 GB low). With compression they average about 675 MB per grid (800 MB high, 500 MB low). The compression takes about a day to settle and doesn't seem to impact performance in a noticeable way. I'd bet your bottleneck would be CPU. Which i7 do you have? There's a huge difference between an i7-4770TE and an i7-4790K. I'm running 8 grids on an i5-8259U, which is comparable to a high-end Devil's Canyon i7. It runs without issues, but again, it's just a handful of people spread across all of the grids.
  11. Hey @Phoenix125 I really like the utility; I'm running it on 2 servers for my cluster. A couple of questions/comments: I have also had the issue with the 2.1.2 64-bit client hanging. I'm able to kill it and restart it without losing my servers, so no big deal. Next time it happens, what do you need me to collect to help you track it down? If it were Linux, I'd attach strace to it... is there anything I can do under Windows to help figure out what it's stuck on? Do you have plans to multi-thread the rcon calls? Maybe let us set a maximum number of workers too. It takes a while to churn through all of the grids, and 99% of the time is spent waiting. Any chance you can modify the zero-pop low-CPU-priority code to look at adjacent grids too? A grid would go to low priority only if it and its adjacent grids have zero population (or are turned off). It would also be nice to be able to whitelist grids to always run at normal priority; it can be a little painful sailing into a new grid that is running at low priority when the population check only happens every 30 seconds. Freeports, and any grid with permanent player bases, should always run at normal priority too. When I start the utility, it takes up about 50 MB of memory and increases at a rate of about 1 MB every few minutes; after a day or so it's around 3 GB. I'm running 2.1.2 x64. I'm hesitant to call it a memory leak, because so many people throw that term around without a firm grasp of what it is... but it really looks like a memory leak. Please let me know if I can help track it down. I don't mind restarting the program once a day or so. Thanks,
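The multi-threaded rcon polling requested above, sketched in Python with a capped worker pool. `query_player_count` is a hypothetical stand-in for the real mcrcon call, and the grid names are made-up examples:

```python
from concurrent.futures import ThreadPoolExecutor

# Stand-in for the real rcon query; the actual mcrcon "listplayers"
# call (mostly waiting on the network) would go here.
def query_player_count(grid):
    return (grid, 0)

# A 6x6 map's worth of grid names, queried concurrently instead of
# one at a time. max_workers is the user-settable worker cap.
grids = [f"{col}{row}" for col in "ABCDEF" for row in range(1, 7)]
with ThreadPoolExecutor(max_workers=8) as pool:
    counts = dict(pool.map(query_player_count, grids))

print(len(counts))  # 36
```

Since the per-grid time is almost entirely network waiting, threads (rather than processes) are enough to collapse the total poll time to roughly the slowest grid divided across the pool.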
  12. The Windows dedicated server works fine. It's annoying, but I set up a Windows 10 VM and I'm running my servers out of it. Without memory compression I'm averaging about 1.9 GB per grid (2.7 GB high, 1.4 GB low); with compression they average about 675 MB per grid (800 MB high, 500 MB low). Once the Linux dedicated server works again, it should just be a matter of copying some files over. I wouldn't wait for the Linux build if you want to host your own.
  13. Do you have NAT reflection/loopback/hairpinning set up too? Can you use a Steam link to the local IP and join that way (it won't work for server transitions, though)? steam://connect/{IP}:{PORT}
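The Steam-link trick above as a tiny helper: build a steam://connect URI pointing at the server's LAN address so clients on the same network can join directly. The IP and port below are example values, not anyone's real server:

```python
# Format a steam://connect URI for a given address. Opening the
# resulting link launches the game client pointed at that server.
def steam_link(ip: str, port: int) -> str:
    return f"steam://connect/{ip}:{port}"

print(steam_link("192.168.1.50", 5761))
# steam://connect/192.168.1.50:5761
```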
  14. Will we have working Linux Dedicated servers again?