In computer networking, load balancing is a method of distributing tasks across several network devices (for example, servers) in order to optimize resource use, reduce request service time, scale a cluster horizontally (dynamically adding/removing devices), and ensure fault tolerance (redundancy).
In computing, load balancing distributes workloads across multiple computing resources, such as computers, computer clusters, networks, central processing units, or disks. The goal of load balancing is to optimize resource utilization, maximize throughput, minimize response time, and avoid overloading any single resource. Using multiple load-balancing components instead of a single component can increase reliability and availability through redundancy. Load balancing usually involves dedicated software or hardware, such as a multilayer switch or a Domain Name System server process.
Load balancing differs from channel bonding in that load balancing divides traffic between network interfaces on a per-network-socket (OSI layer 4) basis, while channel bonding splits traffic between physical interfaces at a lower level, either per packet (OSI layer 3) or per data link (OSI layer 2, as with protocols such as Shortest Path Bridging).
Examples of devices to which balancing applies:
- Server clusters
- Proxies
- Firewalls
- Switches
- Content Inspection Servers
- DNS servers
- Network adapters
Load balancing can be used to extend the capacity of a server farm consisting of more than one server. It can also allow a service to keep working even when several of its servers are out of order. This improves fault tolerance and makes it possible to adjust the computing resources in use dynamically by adding or removing servers from the cluster.
Internet Services
One of the most common applications of load balancing is providing a single Internet service from multiple servers, sometimes known as a server farm. Commonly load-balanced systems include popular websites, large online stores, sites that use the File Transfer Protocol (FTP), Domain Name System (DNS) servers, and databases.
For Internet services, the load balancer is usually a program that listens on the port where external clients connect to the service. The load balancer forwards each request to one of the backend servers, which usually replies to the load balancer. This allows the load balancer to reply to the client without the client ever learning about the internal separation of functions. It also prevents clients from contacting backend servers directly, which can have security benefits: it hides the structure of the internal network and prevents attacks on the kernel's network stack or on unrelated services running on other ports.
Some load balancers provide a mechanism for doing something special when all backend servers are unavailable. This may include forwarding to a backup load balancer or displaying a message about the outage.
It is also important that the load balancer itself does not become a single point of failure. Load balancers are therefore usually deployed in high-availability pairs, which may also replicate session-persistence data if a particular application requires it. [1]
Round-robin DNS
An alternative load-balancing method, which does not necessarily require dedicated software or hardware, is called round-robin DNS. In this technique, multiple IP addresses are associated with a single domain name; clients themselves choose which server to connect to. Unlike a dedicated load balancer, this technique exposes the existence of multiple servers to clients. It has advantages and disadvantages, depending on the degree of control over the DNS servers and the granularity of load balancing that is required.
Another, more effective DNS-based technique is to delegate www.example.org as a sub-domain whose zone is served by the same servers that serve the website. This technique works particularly well when individual servers are spread geographically across the Internet. For example:
one.example.org A 192.0.2.1
two.example.org A 203.0.113.2
www.example.org NS one.example.org
www.example.org NS two.example.org
However, the zone file for www.example.org on each server is different, so that each server resolves its own IP address as the A record. On server one, the zone file for www.example.org reports:
@ IN A 192.0.2.1
On server two, the same zone file contains:
@ IN A 203.0.113.2
Thus, when the first server is down, its DNS does not respond and the web service receives no traffic destined for it. If the link to one server is congested, the unreliability of DNS ensures that less HTTP traffic reaches that server. Furthermore, the quickest DNS response to the resolver is nearly always the one from the network's closest server, providing geo-sensitive load balancing. A short TTL on the A record helps traffic get diverted quickly when a server goes down. Consideration must be given to the possibility that this technique may cause individual clients to switch between individual servers in mid-session.
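The client-side effect of round-robin DNS can be sketched as follows. This is only an illustration with the two A-record addresses from the example above hard-coded; a real client would obtain them from a DNS lookup, and resolvers typically rotate the record order themselves.

```python
import itertools

# The two A records from the example above (hard-coded for illustration;
# a real client receives these from a DNS lookup).
a_records = ["192.0.2.1", "203.0.113.2"]

# Rotating through the record list between lookups spreads successive
# clients across the servers, mimicking what round-robin DNS achieves.
_rotation = itertools.cycle(a_records)

def pick_address():
    """Return the next server address in round-robin order."""
    return next(_rotation)
```

Each call yields the next address in the cycle, so consecutive connections alternate between the two servers.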
Scheduling algorithms
Load balancers use a variety of scheduling algorithms to determine which backend server to send a request to. Simple algorithms include random choice and round-robin. More sophisticated load balancers may take additional factors into account, such as a server's reported load, recent response times, up/down status (determined by a monitoring poll of some kind), number of active connections, geographic location, capabilities, or how much traffic the server has recently been assigned.
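Two of the algorithms mentioned above can be sketched in a few lines of Python. The server names and connection counters here are illustrative, not part of any real balancer:

```python
import itertools

servers = ["srv-a", "srv-b", "srv-c"]   # hypothetical backend names

# Round-robin: hand out servers in a fixed cyclic order.
_cycle = itertools.cycle(servers)

def round_robin():
    return next(_cycle)

# Least connections: route to whichever server currently has the
# fewest active connections, as tracked by the balancer itself.
active = {s: 0 for s in servers}

def least_connections():
    server = min(active, key=active.get)
    active[server] += 1      # a new connection was opened
    return server

def release(server):
    active[server] -= 1      # a connection finished
```

Round-robin ignores server state entirely, while least-connections adapts to uneven request durations at the cost of the balancer having to track every open connection.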
Persistence
An important issue when operating a load-balanced service is how to handle information that must be kept across the multiple requests in a user's session. If this information is stored locally on one backend server, then subsequent requests going to different backend servers will not be able to find it. This might be cached information that can be recomputed, in which case load-balancing a request to a different backend server merely introduces a performance issue.
Ideally, the cluster of servers behind the load balancer should be session-aware, so that if a client connects to any backend server at any time, the user's history of communication with a specific server is irrelevant. This is usually achieved with a shared database or an in-memory session database such as Memcached.
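A minimal sketch of this shared-store idea, with a plain Python dict standing in for an external store such as Memcached (the server and session names are made up):

```python
# A plain dict stands in for an external shared store such as Memcached;
# in a real cluster every server would talk to the same external store
# over the network.
shared_sessions = {}

class Server:
    """Any server can handle any request because session state is shared."""

    def __init__(self, name):
        self.name = name

    def handle(self, session_id, key, value=None):
        session = shared_sessions.setdefault(session_id, {})
        if value is not None:
            session[key] = value      # write session state
        return session.get(key)       # read it back

# Two different servers observe the same session:
a, b = Server("web-1"), Server("web-2")
a.handle("sess-42", "user", "alice")
```

Because the state lives outside any one server, the balancer is free to send each request in the session to a different backend.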
One basic solution to the session-data problem is to send all requests in a user session consistently to the same backend server. This is known as persistence or stickiness. A significant downside of this technique is its lack of automatic failover: if a backend server goes down, its per-session information becomes inaccessible, and any sessions depending on it are lost. The same problem usually applies to a central database server; even if the web servers are stateless and not sticky, the central database is (see below).
Assignment to a particular server might be based on a username, the client's IP address, or be random. Because of changes in the client's perceived address resulting from DHCP, network address translation, and web proxies, this method can be unreliable. Random assignments must be remembered by the load balancer, which creates a burden on storage. If the load balancer is replaced or fails, this information may be lost, and assignments may need to be deleted after a timeout period or during periods of high load, to avoid exceeding the space available for the assignment table. The random assignment method also requires that clients maintain some state, which can be a problem, for example when a web browser has disabled storage of cookies. Sophisticated load balancers use multiple persistence techniques to avoid some of the shortcomings of any one method.
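One common way to realize IP-based assignment without keeping an assignment table at all is to derive the backend deterministically from a hash of the client address. A sketch, with hypothetical backend addresses (and subject to the same caveat as above: the client's perceived address may change):

```python
import hashlib

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]   # hypothetical servers

def sticky_server(client_ip):
    """Deterministically map a client address to a backend, so repeat
    requests from the same address reach the same server without the
    balancer having to remember the assignment."""
    digest = hashlib.sha256(client_ip.encode()).digest()
    return backends[int.from_bytes(digest[:4], "big") % len(backends)]
```

Because the mapping is a pure function of the address, a replacement load balancer computes the same assignments, so nothing is lost on failover of the balancer itself.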
Another solution is to keep the per-session data in a database. In general this is bad for performance, because it increases the load on the database: the database is best used to store information that is less transient than per-session data. To prevent the database from becoming a single point of failure, and to improve scalability, it is often replicated across multiple machines, and load balancing is used to spread the query load across those replicas. Microsoft's ASP.NET State Server technology is an example of a session database. All servers in a web farm store their session data on the State Server, and any server in the farm can retrieve the data.
In the very common case where the client is a web browser, a simple but efficient approach is to store the per-session data in the browser itself. One way to achieve this is to use a browser cookie, suitably time-stamped and encrypted. Another is URL rewriting. Storing session data on the client is generally the preferred solution: the load balancer is then free to pick any backend server to handle a request. However, this method of state handling is poorly suited to some complex business-logic scenarios, where the session state payload is large and recomputing it with every request to a server is not feasible. URL rewriting has major security issues, because the end user can easily alter the submitted URL and thus change session streams.
Yet another solution to storing persistent data is to associate a name with each block of data, use a distributed hash table to pseudo-randomly assign that name to one of the available servers, and then store the block of data on the assigned server.
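The distributed-hash-table idea can be sketched as a simple consistent-hash ring. This is only a sketch under assumed server names; production DHTs add virtual nodes for a smoother distribution, plus replication:

```python
import bisect
import hashlib

def _point(key):
    """Hash a string to a position on the ring."""
    return int.from_bytes(hashlib.md5(key.encode()).digest()[:8], "big")

class HashRing:
    """Pseudo-randomly assign named data blocks to servers: each name maps
    to the first server whose point on the ring follows the name's hash."""

    def __init__(self, servers):
        self._points = sorted((_point(s), s) for s in servers)
        self._keys = [p for p, _ in self._points]

    def server_for(self, name):
        i = bisect.bisect(self._keys, _point(name)) % len(self._points)
        return self._points[i][1]

ring = HashRing(["node-1", "node-2", "node-3"])   # hypothetical names
```

Any participant that knows the server list can compute the same assignment, so no central assignment table is needed, and adding or removing a server only remaps the names that fall in the affected arc of the ring.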
Load balancer features
Hardware and software load balancers may have a variety of special features. The fundamental feature of a load balancer is the ability to distribute incoming requests over a number of backend servers in the cluster according to a scheduling algorithm. Most of the following features are vendor specific:
- Asymmetric load: a ratio can be manually assigned to cause some backend servers to get a greater share of the workload than others. This is sometimes used as a crude way to account for some servers having more capacity than others, and it may not always work as desired.
- Priority activation: when the number of available servers drops below a certain number, or the load gets too high, standby servers can be brought online.
- SSL offload and acceleration: depending on the workload, processing the encryption and authentication requirements of an SSL request can become a major part of the demand on a web server's CPU; as demand increases, users see slower response times, because the SSL overhead is distributed among the web servers. To remove this demand from the web servers, a balancer can terminate SSL connections, passing HTTPS requests on as HTTP requests to the web servers. If the balancer itself is not overloaded, this does not noticeably degrade the performance perceived by end users. The downside of this approach is that all of the SSL processing is concentrated on a single device (the balancer), which can become a new bottleneck. Some load-balancer appliances include specialized hardware for SSL processing. Instead of upgrading the load balancer, which is quite expensive dedicated hardware, it may be cheaper to forgo SSL offload and add a few web servers. Also, some server vendors such as Oracle/Sun now incorporate cryptographic acceleration hardware into their CPUs, such as the T2000. F5 Networks incorporates a dedicated SSL acceleration hardware card in their local traffic manager (LTM), which is used for encrypting and decrypting SSL traffic. One clear benefit of SSL offloading in the balancer is that it enables the balancer to do balancing or content switching based on data in the HTTPS request.
- Distributed denial-of-service (DDoS) attack protection: load balancers can provide features such as SYN cookies and delayed binding (the backend servers do not see the client until it completes its TCP handshake) to mitigate SYN flood attacks, generally offloading this work from the servers to a more efficient platform.
- HTTP compression: reduces the amount of data transferred for HTTP objects by using gzip compression, which is supported by all modern browsers. The larger the response and the farther away the client, the more this feature can improve response times. The trade-off is that it adds CPU load on the balancer, though it removes that load from the web servers.
- TCP offloading: different vendors use different terms for this, but the idea is that normally each HTTP request from each client is a separate TCP connection. This feature uses HTTP/1.1 to consolidate multiple HTTP requests from multiple clients into a single TCP socket to the backend servers.
- TCP buffering: the load balancer can buffer responses from a server and spoon-feed the data out to slow clients, freeing the web server for other tasks sooner than if it had to send the entire response to the client directly.
- Direct Server Return: an option for asymmetric load distribution, where the request and the reply take different network paths.
- Health check: the balancer polls the servers for availability and removes inaccessible servers from the pool.
- HTTP caching: the balancer stores static content so that some requests can be handled without contacting the servers.
- Content filtering: some balancers can arbitrarily modify traffic on the way through.
- HTTP security: some balancers can hide HTTP error pages, remove server identification headers from HTTP responses, and encrypt cookies so that end users cannot manipulate them.
- Priority queuing: also known as rate shaping, the ability to give different priority to different traffic.
- Content-aware switching: most load balancers can send requests to different servers based on the URL being requested, assuming the request is unencrypted (HTTP) or, if it is encrypted (HTTPS), that the HTTPS request is terminated (decrypted) at the load balancer.
- Client authentication: authenticating users against a variety of authentication sources before allowing them access to a website.
- Programmatic traffic manipulation: at least one balancer allows the use of a scripting language to enable custom balancing methods, arbitrary traffic manipulation, and more.
- Firewall: for network-security reasons, direct connections from outside to the backend servers can be prevented. A firewall is a set of rules that decide whether traffic may pass through to the backend servers or not.
- Intrusion prevention system: offers application-layer security in addition to the network/transport-layer security that a firewall provides.
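As an illustration of the health-check feature in the list above, a minimal sketch in Python; the probe and server names are hypothetical stand-ins for a real HTTP or TCP availability check:

```python
def healthy_pool(servers, probe):
    """Keep only the servers that answer the probe; the balancer routes
    traffic to these and re-adds a server once it responds again."""
    return [s for s in servers if probe(s)]

# Hypothetical poll result: web-2 is currently down. `status.get` plays
# the role of a real probe function (e.g. an HTTP GET of a known page).
status = {"web-1": True, "web-2": False, "web-3": True}
pool = healthy_pool(["web-1", "web-2", "web-3"], status.get)
```

Running the check periodically and rebuilding the pool each time gives the automatic removal and re-addition behavior described above.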
Telecommunications Usage
Load balancing can be useful in applications with redundant communication links. For example, a company may have multiple Internet connections that ensure network access even if one of the connections fails. A failover arrangement would mean that one link is designated for normal use, while the second link is used only if the primary link fails.
With load balancing, both links can be in use all the time. A device or program monitors the availability of all links and selects the path along which packets are sent. Using multiple links simultaneously increases the available bandwidth.
Shortest Path Bridging
The IEEE approved the IEEE 802.1aq standard in May 2012, [2] also known and documented in most books as Shortest Path Bridging (SPB). SPB allows all links to be active through multiple equal-cost paths, provides faster convergence times to reduce downtime, and simplifies the use of load balancing in mesh network topologies (partially and/or fully connected) by allowing traffic to share the load across all paths of a network. [3] [4] SPB is designed to virtually eliminate human error during configuration, and it preserves the plug-and-play nature that established Ethernet as the de facto protocol at layer 2. [5]
Routing
Many telecommunications companies have multiple routes through their networks or to external networks. They use sophisticated load balancing to shift traffic from one path to another to avoid network congestion on any particular link, and sometimes to minimize the cost of transit across external networks or to improve network reliability.
Another way of using load balancing is in network-monitoring activities. Load balancers can be used to split huge data flows into several sub-flows and use several network analyzers, each reading a part of the original data. This is very useful for monitoring fast networks like 10GbE or STM64, where complex processing of the data may not be possible at wire speed.
Relationship to failover
Load balancing is often used to implement failover: the continuation of a service after the failure of one or more of its components. The components are monitored continually (for example, web servers may be monitored by fetching known pages), and when one becomes unresponsive, the load balancer is informed and no longer sends traffic to it. When a component comes back online, the load balancer begins routing traffic to it again. For this to work, there must be at least one component in excess of the service's capacity (N+1 redundancy). This can be much less expensive and more flexible than failover approaches in which every single live component is paired with a single backup component that takes over in the event of a failure (dual modular redundancy). Some types of RAID systems can also make use of a hot spare for a similar effect.
Links
- Web server load balancing systems
- Habrahabr.ru — Migrating to Percona XtraDB Cluster. Part I. One possible configuration
- ↑ "High Availability". linuxvirtualserver.org.
- ↑ Shuang Yu (8 May 2012).
- ↑ Peter Ashwood-Smith (24 Feb 2011).
- ↑ Jim Duffy (11 May 2012).
- ↑ "IEEE Approves New IEEE 802.1aq Shortest Path Bridging Standard".