Server administration and infrastructure services
No information system can do without system software. Our administrators have worked with the entire Windows family of operating systems and with many Linux and Unix systems: configuring disk subsystems, security settings, and the built-in firewall. All servers must be updated regularly. That is not difficult when there are dozens of servers, but it becomes a significant task when there are several thousand of them and the maintenance windows for installing updates are no longer enough. Keeping track of operating system licenses across several thousand servers is not easy either, but we solve this task centrally.
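The centralized approach to update tracking can be sketched as follows. This is a minimal illustration with hypothetical data (server names, patch levels, and the baseline are examples, not a real inventory): group servers by the patch level they report and flag the ones that fall behind.

```python
# Minimal sketch of central update-compliance tracking across a fleet.
# The inventory data and baseline below are hypothetical examples.
from collections import defaultdict

# Hypothetical inventory: server name -> last installed update (YYYY-MM).
inventory = {
    "web-01": "2023-10",
    "web-02": "2023-10",
    "db-01":  "2023-08",
    "app-07": "2023-09",
}

CURRENT_BASELINE = "2023-10"  # assumed target patch level

def compliance_report(inventory, baseline):
    """Return servers grouped by patch level, plus the list of stragglers."""
    by_level = defaultdict(list)
    for server, level in inventory.items():
        by_level[level].append(server)
    outdated = sorted(s for s, lvl in inventory.items() if lvl < baseline)
    return dict(by_level), outdated

levels, outdated = compliance_report(inventory, CURRENT_BASELINE)
print(outdated)  # servers that still need a maintenance window scheduled
```

In a real deployment the inventory would be collected automatically from the servers themselves; the report then drives the scheduling of update windows.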
It is easy to manage servers when you can "reach" them from any location, so we paid special attention to remote access to the organization's servers, including remote power control. Virtual servers are the easy case: they always come with a remote console. Physical servers were another matter: we had to equip part of the fleet with remote-management modules (the rest already had them built in).
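Remote power control of physical servers is typically done over a management module's IPMI interface. The sketch below builds a standard `ipmitool` command line; the host name and user are hypothetical examples, and in practice credentials come from a secrets store rather than the command line.

```python
# Sketch of remote power control via IPMI. The ipmitool flags are standard;
# the host and user below are hypothetical examples.
import subprocess

def ipmi_power_command(host, user, action):
    """Build an ipmitool command for a power action: status/on/off/cycle/reset."""
    if action not in ("status", "on", "off", "cycle", "reset"):
        raise ValueError("unsupported power action: %s" % action)
    return ["ipmitool", "-I", "lanplus", "-H", host, "-U", user,
            "power", action]

cmd = ipmi_power_command("bmc-web-01.example.org", "admin", "cycle")
print(" ".join(cmd))
# subprocess.run(cmd, check=True)  # commented out: requires a reachable BMC
```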
Information systems rarely do without infrastructure services: Active Directory, various LDAP directories, DHCP, DNS, NTP. Applications depend on these services, so we have paid much attention to building a fault-tolerant, disaster-resistant architecture for them, so that the failure of an individual component goes unnoticed by applications.
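On the client side, fault tolerance for such services usually comes down to configuring several servers and trying the next one on failure. Here is a generic sketch of that pattern; `fake_query` is a stand-in for a real resolver or NTP client call, and the addresses are examples.

```python
# Generic client-side failover sketch for infrastructure services
# (DNS, NTP, LDAP): try each configured server in order.
def query_with_failover(servers, query):
    """Try each server in order; return the first successful answer."""
    last_error = None
    for server in servers:
        try:
            return query(server)
        except OSError as exc:  # network-level failure: try the next server
            last_error = exc
    raise RuntimeError("all servers failed") from last_error

# Usage with a fake resolver: the first server is "down", the second answers.
def fake_query(server):
    if server == "10.0.0.1":
        raise OSError("timed out")
    return "10.20.30.40"

print(query_with_failover(["10.0.0.1", "10.0.0.2"], fake_query))
```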
We had to organize data exchange between different systems, and we have developed several standard solutions that we try to follow: SFTP, SMB, WebDAV, and NFS.
We also developed several standard solutions for providing safe access to the Internet: one based on MS TMG and one based on Squid.
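The Squid-based variant, at its core, is an access-control proxy. A minimal `squid.conf` sketch (the internal address range and port are illustrative, not our production configuration):

```
# Minimal squid.conf sketch: allow the internal network out, deny everyone else.
http_port 3128
acl localnet src 10.0.0.0/8        # example internal address range
http_access allow localnet
http_access deny all
```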
We approach the issue of security very carefully and have made a number of improvements to our web resources available from the Internet in order to provide two-factor authentication: for example, for portals based on MS SharePoint, for Citrix, and for web-mail access. Technically, our implementation of two-factor authentication allows us to attach a second factor to almost any web application.
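A common choice for the second factor is a standard time-based one-time password (TOTP, RFC 6238), which can be checked server-side next to any web application's login form. The sketch below is a stdlib-only illustration of the algorithm, not our production code; the secret shown is the RFC reference test secret.

```python
# Sketch of a standard TOTP second factor (RFC 6238 / RFC 4226).
# The secret below is the RFC reference value, used here for illustration.
import hmac, hashlib, struct, time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over the counter, dynamically truncated."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, for_time: float = None, step: int = 30) -> str:
    """RFC 6238 TOTP: HOTP over the current 30-second time window."""
    t = time.time() if for_time is None else for_time
    return hotp(secret, int(t) // step)

# RFC 6238 reference secret; at t=59 the 6-digit code is 287082.
print(totp(b"12345678901234567890", for_time=59))  # -> 287082
```

The web application only needs a small verification hook: compare the submitted code against `totp()` for the user's secret (allowing a window of one step either way to tolerate clock skew).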
We have also paid attention to group printing systems.
It all works fine until something breaks. When equipment fails, nothing is "a piece of cake" anymore, and the failure must be detected promptly. It is harder still when the server is alive but users complain about the systems running on it. The administrator then starts collecting server metrics: CPU, memory, disk response times, and more. And if the complaint concerns a malfunction that happened the night before last, in most cases there is nothing left to investigate.

So we spent a lot of effort developing a centralized monitoring system. It can monitor components of various types, each type in its own way; server monitoring is a separate module of the CMS. The system stores the history of measurements, so we can track how parameters change over time. It also reflects the hierarchy of each information system, monitoring every intermediate node with its own template. When a failure occurs or a warning threshold is violated, the service hierarchy is expanded down to the failed component while the parts of the service that are still running stay collapsed; this allows us to detect both the fact and the cause of a failure.
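The threshold-and-hierarchy idea can be sketched in a few lines. Component names, the metric, and the threshold below are illustrative: each component keeps a history of measurements, and when a check fails, the monitor reports the path from the service down to the failing component.

```python
# Sketch of the monitoring idea: per-component measurement history plus
# a walk down the service hierarchy to the first threshold violation.
# Component names and thresholds are illustrative examples.
from collections import defaultdict

history = defaultdict(list)          # (component, metric) -> [(timestamp, value)]
thresholds = {"cpu_load": 0.90}      # example warning threshold: 90% CPU

def record(component, metric, timestamp, value):
    history[(component, metric)].append((timestamp, value))

def failing_path(tree, node, metric, path=()):
    """Walk the service hierarchy; return the path to the first violation."""
    path = path + (node,)
    samples = history.get((node, metric), [])
    if samples and samples[-1][1] > thresholds[metric]:
        return path
    for child in tree.get(node, []):
        found = failing_path(tree, child, metric, path)
        if found:
            return found
    return None

# Hypothetical hierarchy: a portal service running on two servers.
tree = {"portal": ["web-01", "web-02"]}
record("web-01", "cpu_load", 1000, 0.35)
record("web-02", "cpu_load", 1000, 0.97)
print(failing_path(tree, "portal", "cpu_load"))  # -> ('portal', 'web-02')
```

Because the history is kept, the same check can be rerun for any past timestamp, which is exactly what makes "the night before last" complaints answerable.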
We can do all of that. The basic principles we try to adhere to are standardization, automation, and centralized monitoring. They allow us to complete tasks quickly, detect failures promptly, and eliminate their consequences.