Internet Direct (Indy)
TIdCustomTCPServer Class

Specifies the interface for a multi-threaded TCP Server.

Pascal
TIdCustomTCPServer = class(TIdComponent);

TIdCustomTCPServer is a TIdComponent descendant that encapsulates a complete, multi-threaded TCP (Transmission Control Protocol) server. 

TIdCustomTCPServer allows multiple listener threads which listen for client connections to the server, accept new client connections, and hand-off execution of the client task to the server. By default, at least one listener thread is created for the server. For servers with multiple network adapters, additional socket Bindings can be created that allow the server to listen for connection requests using the IP address for the selected network adapter(s). 
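For example, a server on a machine with two network adapters might be configured with one explicit binding per adapter (a sketch; the IP addresses and port shown are placeholders):

```pascal
var
  Server: TIdTCPServer;
begin
  Server := TIdTCPServer.Create(nil);
  // One socket binding per network adapter; the addresses are illustrative.
  with Server.Bindings.Add do
  begin
    IP := '192.168.0.10';  // first adapter
    Port := 6000;
  end;
  with Server.Bindings.Add do
  begin
    IP := '10.0.0.10';     // second adapter
    Port := 6000;
  end;
  Server.Active := True;   // a listener thread is started for each binding
end;
```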

When IP version 6 addresses are enabled for the server, an additional listener thread is created for each adapter specifically for connections using the IPv6 address family. 

TIdCustomTCPServer allows multiple simultaneous client connections, and allocates a separate unit of execution for each client connecting to the server. Each client connection represents a task that is managed by the Scheduler for the server. Listener threads use the Scheduler for the server to create an executable task for each client connection. 

The Scheduler handles creation, execution, and termination of tasks for client connections found in Contexts. The ContextClass property indicates the type of executable task created for client connections and added to Contexts.
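Per-connection state can be carried by assigning a custom context class before the server is activated (a sketch; TMyContext and its field are hypothetical, and in Indy 10 the assigned class is expected to descend from TIdServerContext):

```pascal
uses
  IdTCPServer, IdCustomTCPServer;

type
  // Hypothetical context class carrying per-connection state.
  TMyContext = class(TIdServerContext)
  public
    LoginName: string;
  end;

procedure ConfigureServer(Server: TIdTCPServer);
begin
  // Must be assigned while the server is inactive; each accepted
  // connection is then represented by a TMyContext instance in Contexts.
  Server.ContextClass := TMyContext;
end;
```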

There are two basic types of Schedulers available for TIdCustomTCPServer: thread-based and fiber-based. Each is designed to work with a specific type of executable task that represents client connections to the server. Further Scheduler refinements allow a pool of pre-allocated threads, or threads that perform scheduling for dependent fibers. 

The default Scheduler implementation in TIdCustomTCPServer uses a Thread to represent each client connection. Threads are a common feature found on all platforms and Operating Systems hosting the Indy library. 

Each platform and/or Operating System imposes certain performance and resource limitations on the number of threads that can be created for the server. 

On Windows, for example, the number of threads is limited by the available virtual memory. By default, every thread receives one megabyte of stack space, which implies a theoretical limit of roughly 2,048 threads in a 2-gigabyte address space. Reducing the default stack size allows more threads to be created, but adversely impacts system performance. In addition, threads are pre-emptively scheduled by the host operating system, which allows no control over execution of the thread process. 

Windows addresses these issues by providing the Fibers API. Essentially, Fibers are light-weight threads. Fibers require fewer resources than threads, can be manually scheduled in the server, and run in the context of the thread that schedules them. In other words, Fibers are not pre-emptively scheduled by the Operating System; a Fiber runs when its thread runs. As a result, servers using the Fiber API can be more scalable than comparable thread-based implementations. 

The Fiber API is very different from the Thread API (even on the Windows platform). To hide these API differences, and to preserve code portability to non-Windows platforms, the Scheduler in TIdCustomTCPServer provides an abstraction that masks the differences: the Yarn. 

 

  Q: What do you get when you weave threads and fibers together?
  A: Yarn.

 

While TIdCustomTCPServer does not use Fibers unless a Scheduler supporting Fibers is assigned (TIdSchedulerOfFiber), the Yarn abstraction is an important one that is central to the use of the Scheduler. When the Scheduler supports Threads, it deals only with a Yarn at the Thread level. When the Scheduler supports Fibers, it deals with a Yarn at the Fiber level. 
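A different Scheduler can be assigned before the server is activated. For instance, a pooled thread scheduler pre-allocates its Yarns (a sketch, assuming the TIdSchedulerOfThreadPool class from the IdSchedulerOfThreadPool unit):

```pascal
uses
  IdTCPServer, IdSchedulerOfThreadPool;

var
  Server: TIdTCPServer;
  Scheduler: TIdSchedulerOfThreadPool;
begin
  Server := TIdTCPServer.Create(nil);
  Scheduler := TIdSchedulerOfThreadPool.Create(Server);
  Scheduler.PoolSize := 25;       // pre-allocate 25 threads (Yarns)
  Server.Scheduler := Scheduler;  // must be assigned while Active = False
  Server.Active := True;
end;
```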

The ContextClass property for the server is used to create the unit of execution for the client connection, and also uses the Yarn abstraction. 

TIdCustomTCPServer provides properties and methods that allow configuration of the server and its listener threads, such as DefaultPort, Bindings, ListenQueue, and MaxConnections. 

The TIdCustomTCPServer architecture also provides properties and methods that allow controlling execution of the client connection tasks for the server, such as Scheduler, ContextClass, Contexts, and TerminateWaitTime. 


During initialization of the TIdCustomTCPServer component, resources are allocated for properties in the server, including the Bindings and Contexts collections. 


During initialization of the TIdCustomTCPServer component, the following properties are set to their default values: 

 

Property            Value 
ContextClass        TIdContext class reference 
TerminateWaitTime   5000 (5 seconds) 

 

At run-time, TIdCustomTCPServer is controlled by changing the value in its Active property. When Active is set to True, the following actions are performed, as required, to start the server: 

 

Startup Actions 
Bindings are created and / or initialized for IPv4 and IPv6 addresses. 
Ensures that an IOHandler is assigned for the server. 
Ensures that a Scheduler is assigned for the server. 
Creates and starts listener threads for Bindings using the thread priority tpHighest. 
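The startup and shutdown sequences are normally triggered by a single property assignment (a minimal sketch):

```pascal
var
  Server: TIdTCPServer;
begin
  Server := TIdTCPServer.Create(nil);
  try
    Server.DefaultPort := 6000;  // port used when no explicit Bindings exist
    Server.Active := True;       // binds, assigns a Scheduler, starts listeners
    // ... server runs; clients are handled on scheduled threads ...
    Server.Active := False;      // stops listeners and disconnects clients
  finally
    Server.Free;
  end;
end;
```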

 

When Active is set to False, the following actions are performed, as required, to stop the server: 

 

Shutdown Actions 
Terminates and closes the socket handle for any listener threads. 
Terminates and disconnects any client connections in Contexts, and any Yarns in the Scheduler. 
Frees an implicitly created Scheduler for the server. 

 

Changing the value in Active in the IDE (design-time) has no effect other than storing the property value that is used at run-time and during component streaming. 

While the server is Active, listener thread(s) are used to detect and accept client connection requests. When a connection request is detected, the Scheduler in the server instance is used to acquire the thread or fiber that controls execution of the task for the client connection. The IOHandler in the server is used to get an IOHandler for the client connection. 

A client connection request can be refused for any number of reasons. The listener thread may get stopped during processing of the request, or the attempt to create an IOHandler for the client might fail because of an invalid socket handle. 

The most common error occurs when the number of client connections in Contexts exceeds the number allowed in MaxConnections. In this situation, a message indicating the condition is written to the client connection, and the client connection is disconnected. 

If the connection is refused due to any other exception, the OnListenException event handler (when assigned) is triggered. 

If no error or exception is detected for the connection request, a TIdContext instance is created for each executable task for the client connection. Procedures that trigger the OnConnect, OnDisconnect, and OnExecute event handlers are assigned to the corresponding event handlers in the client connection context. The context is then given to the Scheduler for execution using its native thread or fiber support. 

At this point, the executable task for the client connection is fully self-contained and controlled using the OnConnect, OnDisconnect, and OnExecute (for TIdTCPServer) event handlers. OnConnect and OnDisconnect are optional, and should be used only to perform simple tasks that are not time consuming. Using Synchronized method calls that access the main VCL thread is highly discouraged. Use OnExecute to control the interaction between the client and the server during the lifetime of the client connection. 
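For example, an OnExecute handler for TIdTCPServer might implement a trivial line-based exchange (a sketch; the PING/PONG commands are illustrative, not part of Indy):

```pascal
procedure TMyForm.IdTCPServer1Execute(AContext: TIdContext);
var
  Cmd: string;
begin
  // OnExecute is called repeatedly, once per loop iteration, for the
  // lifetime of the client connection.
  Cmd := AContext.Connection.IOHandler.ReadLn;
  if SameText(Cmd, 'PING') then
    AContext.Connection.IOHandler.WriteLn('PONG')
  else
    AContext.Connection.IOHandler.WriteLn('ERR unknown command');
end;
```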

TIdCustomTCPServer provides support for the TCP transport, and does not implement support for any given protocol used for communication exchanges between clients and the server. TIdCustomTCPServer can be used as a base class to create custom TCP server descendants that support specific protocols. 
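A protocol-specific descendant typically overrides the protected DoExecute method, which the scheduled task calls for each client connection (a sketch; the echo behavior shown is illustrative):

```pascal
uses
  IdCustomTCPServer, IdContext;

type
  TMyEchoServer = class(TIdCustomTCPServer)
  protected
    function DoExecute(AContext: TIdContext): Boolean; override;
  end;

function TMyEchoServer.DoExecute(AContext: TIdContext): Boolean;
var
  Line: string;
begin
  Result := inherited DoExecute(AContext);
  // Echo each received line back to the client.
  Line := AContext.Connection.IOHandler.ReadLn;
  AContext.Connection.IOHandler.WriteLn(Line);
end;
```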

Use TIdCmdTCPServer for a TCP server that includes TIdCommandHandler and TIdCommand support.

Internet Direct (Indy) version 10.1.5
Copyright © 1993-2006, Chad Z. Hower (aka Kudzu) and the Indy Pit Crew. All rights reserved.
Website http://www.indyproject.org.