Struct cloudfiles::HadoopDFS
The HadoopDFS struct provides easy access to files stored in HDFS clusters.
Syntax
cloudfiles::HadoopDFS
Remarks
The HadoopDFS struct offers an easy-to-use API compatible with any Hadoop distributed file system (HDFS) cluster that exposes Hadoop's standard WebHDFS REST API. Capabilities include uploading and downloading files, strong encryption support, creating folders, file manipulation and organization, and more.
Authentication
First, set the url property to the base WebHDFS URL of the server (see url for more details).
Depending on how the server is configured, there are a few different authentication mechanisms that might be used (or the server might not require authentication at all). Refer to the auth_mechanism property for more information about configuring the struct to authenticate correctly.
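As a minimal sketch, pointing the struct at a cluster that uses Hadoop simple authentication might look like the following. The set_url and set_user setters are assumed here based on the setter pattern documented throughout this reference; the URL and user name are placeholders.
// Select Hadoop simple authentication; the user name is sent via the user.name query parameter.
hdfs.set_url("http://namenode.example.com:9870/webhdfs/v1/"); // placeholder WebHDFS base URL
hdfs.set_auth_mechanism(1); // 1 = amSimple
hdfs.set_user("hdfs_user"); // placeholder user name; set_url/set_user are assumed setters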
Addressing Resources
HDFS addresses resources (files, directories, and symlinks) using Linux-style absolute paths. Unless otherwise specified, the struct always works in terms of absolute paths, and will always prepend a forward slash (/) to any path passed to it that does not already start with one.
Listing Directory Contents
list_resources lists resources (files, directories, and symlinks) within the specified directory. Calling this method fires the on_resource_list event once for each resource and also populates the resource properties (resource_count, resource_name, resource_path, and so on).
// ResourceList event handler.
hdfs.OnResourceList += (s, e) => {
Console.WriteLine(e.Name);
};
hdfs.ListResources("/work_files/serious_business/cats");
for (int i = 0; i < hdfs.Resources.Count; i++) {
// Process resources here.
}
Downloading Files
The download_file method downloads files.
If local_file is set, the file will be saved to the specified location; otherwise, the file data will be held by resource_data.
To download and decrypt an encrypted file, set encryption_algorithm and encryption_password before calling download_file.
Download Notes
In the simplest use-case, downloading a file looks like this:
hdfs.LocalFile = "../MyFile.zip";
hdfs.DownloadFile(hdfs.Resources[0].Path);
Resuming Downloads
The struct also supports resuming failed downloads via the start_byte property. If a download is interrupted, set start_byte to the appropriate offset before calling download_file again to resume the download.
string downloadFile = "../MyFile.zip";
hdfs.LocalFile = downloadFile;
hdfs.DownloadFile(hdfs.Resources[0].Path);
//The transfer is interrupted and DownloadFile() above fails. Later, resume the download:
//Get the size of the partially downloaded file
hdfs.StartByte = new FileInfo(downloadFile).Length;
hdfs.DownloadFile(hdfs.Resources[0].Path);
Resuming Encrypted File Downloads
Resuming encrypted file downloads is only supported when local_file was set in the initial download attempt.
If local_file is set when beginning an encrypted download, the struct creates a temporary file in TempPath to hold the encrypted data until the download is complete. If the download is interrupted, DownloadTempFile will be populated with the path of the temporary file that holds the partial data.
To resume, DownloadTempFile must be populated, along with start_byte, to allow the remainder of the encrypted data to be downloaded. Once the encrypted data is downloaded it will be decrypted and written to local_file.
hdfs.LocalFile = "../MyFile.zip";
hdfs.EncryptionPassword = "password";
hdfs.DownloadFile(hdfs.Resources[0].Path);
//The transfer is interrupted and DownloadFile() above fails. Later, resume the download:
//Get the size of the partially downloaded temp file
hdfs.StartByte = new FileInfo(hdfs.Config("DownloadTempFile")).Length;
hdfs.DownloadFile(hdfs.Resources[0].Path);
Uploading Files
The upload_file method uploads new files.
If local_file is set, the file will be uploaded from the specified path. If local_file is not set, the data in resource_data will be used.
To encrypt the file before uploading it, set encryption_algorithm and encryption_password.
hdfs.LocalFile = "../MyFile.zip";
hdfs.UploadFile("/MyFile.zip");
Additional Functionality
The HadoopDFS struct offers advanced functionality beyond simple uploads and downloads. For instance:
- Encrypt and decrypt files using the encryption_algorithm and encryption_password properties.
- Basic file and folder manipulation and organization using methods such as append_file, delete_resource, make_directory, move_resource, and truncate_file (a brief sketch follows this list).
- Advanced file and directory manipulation with set_file_replication, set_owner, set_permission, and set_times.
- Retrieval of both general file/directory information, as well as directory quota information, using get_resource_info and get_dir_summary.
- Execution of arbitrary WebHDFS operations using the do_custom_op method.
- And more!
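As a rough sketch of the file-management methods mentioned above, basic directory and file operations might look like the following. Only the method names are documented in the list; the parameter lists shown here are assumptions, and the paths are placeholders.
// Hedged sketch: create a directory, move a file into it, then delete an old file.
// The argument lists for these methods are assumed (path, or source and destination paths).
hdfs.make_directory("/work_files/archive");
hdfs.move_resource("/work_files/report.txt", "/work_files/archive/report.txt");
hdfs.delete_resource("/work_files/old_report.txt");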
Object Lifetime
The new() method returns a mutable reference to a struct instance. The object itself is kept in the global list maintained by CloudFiles. Because of this, the HadoopDFS struct cannot be disposed of automatically. Call the dispose(&mut self) method of HadoopDFS when you have finished using the instance.
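In outline, the lifetime described above looks like the following sketch; the exact signature of new() is not shown in this section and error handling is omitted.
// Sketch of the lifetime described above.
let hdfs = HadoopDFS::new(); // returns a mutable reference to an instance kept in CloudFiles' global list
// ... use the instance: list_resources, download_file, upload_file, etc. ...
hdfs.dispose(); // must be called explicitly; the instance is not disposed of automatically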
Property List
The following is the full list of the properties of the struct with short descriptions. Click on the links for further details.
| auth_mechanism | The authentication mechanism to use when connecting to the server. |
| authorization | The OAuth 2.0 authorization token. |
| dir_summary_dir_count | The number of subdirectories within the directory. |
| dir_summary_file_count | The number of files within the directory. |
| dir_summary_name_quota | The name quota imposed on the directory. |
| dir_summary_size | The total size of the directory contents, excluding file replicas. |
| dir_summary_space_quota | The space quota imposed on the directory. |
| dir_summary_space_used | The total amount of space the directory consumes on disk. |
| dir_summary_storage_quota | The storage type quota imposed on the directory. |
| dir_summary_storage_quota_count | The number of storage type quotas associated with the directory. |
| dir_summary_storage_quota_index | Selects the storage type quota to show information for. |
| dir_summary_storage_quota_type | The storage type associated with the storage type quota. |
| dir_summary_storage_quota_used | The number of bytes consumed for the storage type quota. |
| encryption_algorithm | The encryption algorithm. |
| encryption_password | The encryption password. |
| firewall_auto_detect | Whether to automatically detect and use firewall system settings, if available. |
| firewall_type | The type of firewall to connect through. |
| firewall_host | The name or IP address of the firewall (optional). |
| firewall_password | A password if authentication is to be used when connecting through the firewall. |
| firewall_port | The Transmission Control Protocol (TCP) port for the firewall_host. |
| firewall_user | A username if authentication is to be used when connecting through a firewall. |
| idle | The current status of the struct. |
| local_file | The location of the local file. |
| local_host | The name of the local host or user-assigned IP interface through which connections are initiated or accepted. |
| other_headers | Other headers as determined by the user (optional). |
| overwrite | Whether to overwrite the local or remote file. |
| parsed_header_count | The number of records in the ParsedHeader arrays. |
| parsed_header_field | The name of the HTTP header (in the same case as it was delivered). |
| parsed_header_value | This property contains the header contents. |
| password | The password to use for authentication. |
| proxy_auth_scheme | The type of authorization to perform when connecting to the proxy. |
| proxy_auto_detect | Whether to automatically detect and use proxy system settings, if available. |
| proxy_password | A password if authentication is to be used for the proxy. |
| proxy_port | The Transmission Control Protocol (TCP) port for the proxy Server (default 80). |
| proxy_server | If a proxy Server is given, then the HTTP request is sent to the proxy instead of the server otherwise specified. |
| proxy_ssl | When to use a Secure Sockets Layer (SSL) for the connection to the proxy. |
| proxy_user | A username if authentication is to be used for the proxy. |
| query_param_count | The number of records in the QueryParam arrays. |
| query_param_name | The name of the query parameter. |
| query_param_value | The value of the query parameter. |
| read_bytes | The number of bytes to read when downloading a file. |
| resource_data | The data that was downloaded, or that should be uploaded. |
| resource_count | The number of records in the Resource arrays. |
| resource_access_time | The last access time of the resource. |
| resource_block_size | The block size of the file. |
| resource_child_count | The number of children in the directory. |
| resource_group | The name of the resource's group. |
| resource_modified_time | The last modified time of the resource. |
| resource_name | The name of the resource. |
| resource_owner | The name of the resource's owner. |
| resource_path | The full path of the resource. |
| resource_permission | The resource's permission bits. |
| resource_replication | The replication factor of the file. |
| resource_size | The size of the file. |
| resource_symlink_target | The full target path of the symlink. |
| resource_type | The resource type. |
| ssl_accept_server_cert_effective_date | The date on which this certificate becomes valid. |
| ssl_accept_server_cert_expiration_date | The date on which the certificate expires. |
| ssl_accept_server_cert_extended_key_usage | A comma-delimited list of extended key usage identifiers. |
| ssl_accept_server_cert_fingerprint | The hex-encoded, 16-byte MD5 fingerprint of the certificate. |
| ssl_accept_server_cert_fingerprint_sha1 | The hex-encoded, 20-byte SHA-1 fingerprint of the certificate. |
| ssl_accept_server_cert_fingerprint_sha256 | The hex-encoded, 32-byte SHA-256 fingerprint of the certificate. |
| ssl_accept_server_cert_issuer | The issuer of the certificate. |
| ssl_accept_server_cert_private_key | The private key of the certificate (if available). |
| ssl_accept_server_cert_private_key_available | Whether a PrivateKey is available for the selected certificate. |
| ssl_accept_server_cert_private_key_container | The name of the PrivateKey container for the certificate (if available). |
| ssl_accept_server_cert_public_key | The public key of the certificate. |
| ssl_accept_server_cert_public_key_algorithm | The textual description of the certificate's public key algorithm. |
| ssl_accept_server_cert_public_key_length | The length of the certificate's public key (in bits). |
| ssl_accept_server_cert_serial_number | The serial number of the certificate encoded as a string. |
| ssl_accept_server_cert_signature_algorithm | The text description of the certificate's signature algorithm. |
| ssl_accept_server_cert_store | The name of the certificate store for the client certificate. |
| ssl_accept_server_cert_store_password | If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store. |
| ssl_accept_server_cert_store_type | The type of certificate store for this certificate. |
| ssl_accept_server_cert_subject_alt_names | A comma-separated list of alternative subject names for the certificate. |
| ssl_accept_server_cert_thumbprint_md5 | The MD5 hash of the certificate. |
| ssl_accept_server_cert_thumbprint_sha1 | The SHA-1 hash of the certificate. |
| ssl_accept_server_cert_thumbprint_sha256 | The SHA-256 hash of the certificate. |
| ssl_accept_server_cert_usage | The text description of UsageFlags. |
| ssl_accept_server_cert_usage_flags | The flags that show intended use for the certificate. |
| ssl_accept_server_cert_version | The certificate's version number. |
| ssl_accept_server_cert_subject | The subject of the certificate used for client authentication. |
| ssl_accept_server_cert_encoded | The certificate (PEM/Base64 encoded). |
| ssl_cert_effective_date | The date on which this certificate becomes valid. |
| ssl_cert_expiration_date | The date on which the certificate expires. |
| ssl_cert_extended_key_usage | A comma-delimited list of extended key usage identifiers. |
| ssl_cert_fingerprint | The hex-encoded, 16-byte MD5 fingerprint of the certificate. |
| ssl_cert_fingerprint_sha1 | The hex-encoded, 20-byte SHA-1 fingerprint of the certificate. |
| ssl_cert_fingerprint_sha256 | The hex-encoded, 32-byte SHA-256 fingerprint of the certificate. |
| ssl_cert_issuer | The issuer of the certificate. |
| ssl_cert_private_key | The private key of the certificate (if available). |
| ssl_cert_private_key_available | Whether a PrivateKey is available for the selected certificate. |
| ssl_cert_private_key_container | The name of the PrivateKey container for the certificate (if available). |
| ssl_cert_public_key | The public key of the certificate. |
| ssl_cert_public_key_algorithm | The textual description of the certificate's public key algorithm. |
| ssl_cert_public_key_length | The length of the certificate's public key (in bits). |
| ssl_cert_serial_number | The serial number of the certificate encoded as a string. |
| ssl_cert_signature_algorithm | The text description of the certificate's signature algorithm. |
| ssl_cert_store | The name of the certificate store for the client certificate. |
| ssl_cert_store_password | If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store. |
| ssl_cert_store_type | The type of certificate store for this certificate. |
| ssl_cert_subject_alt_names | A comma-separated list of alternative subject names for the certificate. |
| ssl_cert_thumbprint_md5 | The MD5 hash of the certificate. |
| ssl_cert_thumbprint_sha1 | The SHA-1 hash of the certificate. |
| ssl_cert_thumbprint_sha256 | The SHA-256 hash of the certificate. |
| ssl_cert_usage | The text description of UsageFlags. |
| ssl_cert_usage_flags | The flags that show intended use for the certificate. |
| ssl_cert_version | The certificate's version number. |
| ssl_cert_subject | The subject of the certificate used for client authentication. |
| ssl_cert_encoded | The certificate (PEM/Base64 encoded). |
| ssl_provider | The Secure Sockets Layer/Transport Layer Security (SSL/TLS) implementation to use. |
| ssl_server_cert_effective_date | The date on which this certificate becomes valid. |
| ssl_server_cert_expiration_date | The date on which the certificate expires. |
| ssl_server_cert_extended_key_usage | A comma-delimited list of extended key usage identifiers. |
| ssl_server_cert_fingerprint | The hex-encoded, 16-byte MD5 fingerprint of the certificate. |
| ssl_server_cert_fingerprint_sha1 | The hex-encoded, 20-byte SHA-1 fingerprint of the certificate. |
| ssl_server_cert_fingerprint_sha256 | The hex-encoded, 32-byte SHA-256 fingerprint of the certificate. |
| ssl_server_cert_issuer | The issuer of the certificate. |
| ssl_server_cert_private_key | The private key of the certificate (if available). |
| ssl_server_cert_private_key_available | Whether a PrivateKey is available for the selected certificate. |
| ssl_server_cert_private_key_container | The name of the PrivateKey container for the certificate (if available). |
| ssl_server_cert_public_key | The public key of the certificate. |
| ssl_server_cert_public_key_algorithm | The textual description of the certificate's public key algorithm. |
| ssl_server_cert_public_key_length | The length of the certificate's public key (in bits). |
| ssl_server_cert_serial_number | The serial number of the certificate encoded as a string. |
| ssl_server_cert_signature_algorithm | The text description of the certificate's signature algorithm. |
| ssl_server_cert_store | The name of the certificate store for the client certificate. |
| ssl_server_cert_store_password | If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store. |
| ssl_server_cert_store_type | The type of certificate store for this certificate. |
| ssl_server_cert_subject_alt_names | A comma-separated list of alternative subject names for the certificate. |
| ssl_server_cert_thumbprint_md5 | The MD5 hash of the certificate. |
| ssl_server_cert_thumbprint_sha1 | The SHA-1 hash of the certificate. |
| ssl_server_cert_thumbprint_sha256 | The SHA-256 hash of the certificate. |
| ssl_server_cert_usage | The text description of UsageFlags. |
| ssl_server_cert_usage_flags | The flags that show intended use for the certificate. |
| ssl_server_cert_version | The certificate's version number. |
| ssl_server_cert_subject | The subject of the certificate used for client authentication. |
| ssl_server_cert_encoded | The certificate (PEM/Base64 encoded). |
| start_byte | The byte offset from which to start downloading a file. |
| timeout | The timeout for the struct. |
| url | The URL of the Hadoop WebHDFS server. |
| user | The user name to use for authentication. |
Method List
The following is the full list of the methods of the struct with short descriptions. Click on the links for further details.
| add_query_param | Adds a query parameter to the QueryParams properties. |
| append_file | Appends data to an existing file. |
| calc_authorization | Calculates the Authorization header based on provided credentials. |
| config | Sets or retrieves a configuration setting. |
| delete_resource | Deletes a resource. |
| do_custom_op | Executes an arbitrary WebHDFS operation. |
| download_file | Downloads a file. |
| get_dir_summary | Gets a content summary for a directory. |
| get_resource_info | Gets information about a specific resource. |
| interrupt | Interrupts the current method. |
| join_file_blocks | Joins multiple files' blocks together into one file. |
| list_resources | Lists resources in a given directory. |
| make_directory | Makes a directory. |
| move_resource | Moves a resource. |
| reset | Resets the struct to its initial state. |
| set_file_replication | Sets the replication factor for a file. |
| set_owner | Sets a resource's owner and/or group. |
| set_permission | Assigns the given permission to a resource. |
| set_times | Sets a resource's modification and/or access times. |
| truncate_file | Truncates a file to a given size. |
| upload_file | Uploads a file. |
Event List
The following is the full list of the events fired by the struct with short descriptions. Click on the links for further details.
| on_end_transfer | This event fires when a document finishes transferring. |
| on_error | Fired when information is available about errors during data delivery. |
| on_header | Fired every time a header line comes in. |
| on_log | Fired once for each log message. |
| on_progress | Fires during an upload or download to indicate transfer progress. |
| on_resource_list | Fires once for each resource returned when listing resources. |
| on_ssl_server_authentication | Fired after the server presents its certificate to the client. |
| on_ssl_status | Fired when secure connection progress messages are available. |
| on_start_transfer | This event fires when a document starts transferring (after the headers). |
| on_transfer | Fired while a document transfers (delivers document). |
Config Settings
The following is a list of config settings for the struct with short descriptions. Click on the links for further details.
| CreatePermission | The permission to assign when creating resources. |
| DownloadTempFile | The temporary file used when downloading encrypted data. |
| EncryptionIV | The initialization vector to be used for encryption/decryption. |
| EncryptionKey | The key to use during encryption/decryption. |
| EncryptionPasswordKDF | The KDF algorithm to use during password based encryption and decryption. |
| HomeDir | Can be queried to obtain the current user's home directory path. |
| ProgressAbsolute | Whether the struct should track transfer progress absolutely. |
| ProgressStep | How often the progress event should be fired, in terms of percentage. |
| RawRequest | Returns the data that was sent to the server. |
| RawResponse | Returns the data that was received from the server. |
| RecursiveDelete | Whether to recursively delete non-empty directories. |
| TempPath | The path to the directory where temporary files are created. |
| XChildCount | The number of child elements of the current element. |
| XChildName[i] | The name of the child element. |
| XChildXText[i] | The inner text of the child element. |
| XElement | The name of the current element. |
| XParent | The parent of the current element. |
| XPath | Provides a way to point to a specific element in the returned XML or JSON response. |
| XSubTree | A snapshot of the current element in the document. |
| XText | The text of the current element. |
| AcceptEncoding | Used to tell the server which types of content encodings the client supports. |
| AllowHTTPCompression | This property enables HTTP compression for receiving data. |
| AllowHTTPFallback | Whether HTTP/2 connections are permitted to fallback to HTTP/1.1. |
| Append | Whether to append data to LocalFile. |
| Authorization | The Authorization string to be sent to the server. |
| BytesTransferred | Contains the number of bytes transferred in the response data. |
| ChunkSize | Specifies the chunk size in bytes when using chunked encoding. |
| CompressHTTPRequest | Set to true to compress the body of a PUT or POST request. |
| EncodeURL | If set to True the URL will be encoded by the struct. |
| FollowRedirects | Determines what happens when the server issues a redirect. |
| GetOn302Redirect | If set to True the struct will perform a GET on the new location. |
| HTTP2HeadersWithoutIndexing | HTTP2 headers that should not update the dynamic header table with incremental indexing. |
| HTTPVersion | The version of HTTP used by the struct. |
| IfModifiedSince | A date determining the maximum age of the desired document. |
| KeepAlive | Determines whether the HTTP connection is closed after completion of the request. |
| KerberosSPN | The Service Principal Name for the Kerberos Domain Controller. |
| LogLevel | The level of detail that is logged. |
| MaxRedirectAttempts | Limits the number of redirects that are followed in a request. |
| NegotiatedHTTPVersion | The negotiated HTTP version. |
| OtherHeaders | Other headers as determined by the user (optional). |
| ProxyAuthorization | The authorization string to be sent to the proxy server. |
| ProxyAuthScheme | The authorization scheme to be used for the proxy. |
| ProxyPassword | A password if authentication is to be used for the proxy. |
| ProxyPort | Port for the proxy server (default 80). |
| ProxyServer | Name or IP address of a proxy server (optional). |
| ProxyUser | A user name if authentication is to be used for the proxy. |
| SentHeaders | The full set of headers as sent by the client. |
| StatusCode | The status code of the last response from the server. |
| StatusLine | The first line of the last response from the server. |
| TransferredData | The contents of the last response from the server. |
| TransferredDataLimit | The maximum number of incoming bytes to be stored by the struct. |
| TransferredHeaders | The full set of headers as received from the server. |
| TransferredRequest | The full request as sent by the client. |
| UseChunkedEncoding | Enables or Disables HTTP chunked encoding for transfers. |
| UseIDNs | Whether to encode hostnames to internationalized domain names. |
| UseProxyAutoConfigURL | Whether to use a Proxy auto-config file when attempting a connection. |
| UserAgent | Information about the user agent (browser). |
| ConnectionTimeout | Sets a separate timeout value for establishing a connection. |
| FirewallAutoDetect | Tells the struct whether or not to automatically detect and use firewall system settings, if available. |
| FirewallHost | Name or IP address of firewall (optional). |
| FirewallPassword | Password to be used if authentication is to be used when connecting through the firewall. |
| FirewallPort | The TCP port for the FirewallHost. |
| FirewallType | Determines the type of firewall to connect through. |
| FirewallUser | A user name if authentication is to be used connecting through a firewall. |
| KeepAliveInterval | The retry interval, in milliseconds, to be used when a TCP keep-alive packet is sent and no response is received. |
| KeepAliveTime | The inactivity time in milliseconds before a TCP keep-alive packet is sent. |
| Linger | When set to True, connections are terminated gracefully. |
| LingerTime | Time in seconds to have the connection linger. |
| LocalHost | The name of the local host through which connections are initiated or accepted. |
| LocalPort | The port in the local host where the struct binds. |
| MaxLineLength | The maximum amount of data to accumulate when no EOL is found. |
| MaxTransferRate | The transfer rate limit in bytes per second. |
| ProxyExceptionsList | A semicolon separated list of hosts and IPs to bypass when using a proxy. |
| TCPKeepAlive | Determines whether or not the keep alive socket option is enabled. |
| TcpNoDelay | Whether or not to delay when sending packets. |
| UseIPv6 | Whether to use IPv6. |
| UseNTLMv2 | Whether to use NTLM V2. |
| LogSSLPackets | Controls whether SSL packets are logged when using the internal security API. |
| OpenSSLCADir | The path to a directory containing CA certificates. |
| OpenSSLCAFile | Name of the file containing the list of CA's trusted by your application. |
| OpenSSLCipherList | A string that controls the ciphers to be used by SSL. |
| OpenSSLPrngSeedData | The data to seed the pseudo random number generator (PRNG). |
| ReuseSSLSession | Determines if the SSL session is reused. |
| SSLCACerts | A newline separated list of CA certificates to be included when performing an SSL handshake. |
| SSLCheckCRL | Whether to check the Certificate Revocation List for the server certificate. |
| SSLCheckOCSP | Whether to use OCSP to check the status of the server certificate. |
| SSLCipherStrength | The minimum cipher strength used for bulk encryption. |
| SSLClientCACerts | A newline separated list of CA certificates to use during SSL client certificate validation. |
| SSLEnabledCipherSuites | The cipher suite to be used in an SSL negotiation. |
| SSLEnabledProtocols | Used to enable/disable the supported security protocols. |
| SSLEnableRenegotiation | Whether the renegotiation_info SSL extension is supported. |
| SSLIncludeCertChain | Whether the entire certificate chain is included in the SSLServerAuthentication event. |
| SSLKeyLogFile | The location of a file where per-session secrets are written for debugging purposes. |
| SSLNegotiatedCipher | Returns the negotiated cipher suite. |
| SSLNegotiatedCipherStrength | Returns the negotiated cipher suite strength. |
| SSLNegotiatedCipherSuite | Returns the negotiated cipher suite. |
| SSLNegotiatedKeyExchange | Returns the negotiated key exchange algorithm. |
| SSLNegotiatedKeyExchangeStrength | Returns the negotiated key exchange algorithm strength. |
| SSLNegotiatedVersion | Returns the negotiated protocol version. |
| SSLSecurityFlags | Flags that control certificate verification. |
| SSLServerCACerts | A newline separated list of CA certificates to use during SSL server certificate validation. |
| TLS12SignatureAlgorithms | Defines the allowed TLS 1.2 signature algorithms when SSLProvider is set to Internal. |
| TLS12SupportedGroups | The supported groups for ECC. |
| TLS13KeyShareGroups | The groups for which to pregenerate key shares. |
| TLS13SignatureAlgorithms | The allowed certificate signature algorithms. |
| TLS13SupportedGroups | The supported groups for (EC)DHE key exchange. |
| AbsoluteTimeout | Determines whether timeouts are inactivity timeouts or absolute timeouts. |
| FirewallData | Used to send extra data to the firewall. |
| InBufferSize | The size in bytes of the incoming queue of the socket. |
| OutBufferSize | The size in bytes of the outgoing queue of the socket. |
| BuildInfo | Information about the product's build. |
| CodePage | The system code page used for Unicode to Multibyte translations. |
| LicenseInfo | Information about the current license. |
| MaskSensitiveData | Whether sensitive data is masked in log messages. |
| UseInternalSecurityAPI | Whether or not to use the system security libraries or an internal implementation. |
auth_mechanism property (HadoopDFS Struct)
The authentication mechanism to use when connecting to the server.
Syntax
fn auth_mechanism(&self ) -> Result<i32, CloudFilesError>
fn set_auth_mechanism(&self, value : i32) -> Option<CloudFilesError>
Possible Values
0 // None
1 // Simple
2 // Basic
3 // NTLM
4 // Negotiate
5 // OAuth
Default Value
0
Remarks
This property controls what authentication mechanism the struct should use when connecting to the server.
Possible values are:
| 0 (amNone - default) | No authentication is performed. |
| 1 (amSimple) | Hadoop pseudo/simple authentication is performed. |
| 2 (amBasic) | Basic authentication is performed. |
| 3 (amNTLM) | NTLM authentication is performed. |
| 4 (amNegotiate) | Negotiate authentication is performed. |
| 5 (amOAuth) | OAuth authentication is performed. |
When set to 1 (amSimple), the value of the user property is automatically sent in every request using the user.name query parameter.
When set to 2 (amBasic), 3 (amNTLM), or 4 (amNegotiate), the values held by the user and password properties will be used to perform Basic, NTLM, or Negotiate (e.g., Kerberos SPNEGO) authentication.
When set to 5 (amOAuth), the value of the authorization property is automatically sent in every request using the Authorization HTTP header.
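For instance, a Basic authentication setup might look like the following sketch; set_user is assumed to mirror the documented set_password setter, and the credentials are placeholders.
// Sketch: HTTP Basic authentication using the user and password properties.
hdfs.set_auth_mechanism(2); // 2 = amBasic
hdfs.set_user("hdfs_user"); // assumed setter, mirroring set_password; placeholder value
hdfs.set_password("secret"); // placeholder value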
Data Type
i32
authorization property (HadoopDFS Struct)
The OAuth 2.0 authorization token.
Syntax
fn authorization(&self ) -> Result<String, CloudFilesError>
fn set_authorization(&self, value : &str) -> Option<CloudFilesError>
fn set_authorization_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
This property is used to specify the OAuth 2.0 authorization token that was obtained as a result of performing OAuth. A valid OAuth token will come in the following format:
Bearer ACCESS_TOKEN
The ACCESS_TOKEN segment must be supplied to this property before attempting any other operations. Consult the documentation for the service for more information about supported scope values and more details on performing OAuth.
Refer to auth_mechanism for more information.
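For example, supplying a token might look like the following sketch; ACCESS_TOKEN is a placeholder for the token obtained from your own OAuth flow.
// Sketch: supply an OAuth 2.0 bearer token and select OAuth authentication.
hdfs.set_authorization("Bearer ACCESS_TOKEN"); // replace ACCESS_TOKEN with your token
hdfs.set_auth_mechanism(5); // 5 = amOAuth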
Data Type
String
dir_summary_dir_count property (HadoopDFS Struct)
The number of subdirectories within the directory.
Syntax
fn dir_summary_dir_count(&self ) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The number of subdirectories within the directory.
This property reflects the number of subdirectories contained within the directory, calculated recursively. Note that this count will always include the directory itself; i.e., this property would return 1 for a directory with no subfolders.
This property is read-only.
Data Type
i64
dir_summary_file_count property (HadoopDFS Struct)
The number of files within the directory.
Syntax
fn dir_summary_file_count(&self ) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The number of files within the directory.
This property reflects the number of files contained within the directory, calculated recursively.
This property is read-only.
Data Type
i64
dir_summary_name_quota property (HadoopDFS Struct)
The name quota imposed on the directory.
Syntax
fn dir_summary_name_quota(&self ) -> Result<i64, CloudFilesError>
Default Value
-1
Remarks
The name quota imposed on the directory.
This property reflects the name quota imposed on the directory, or -1 if the directory doesn't have a name quota set.
A name quota limits the number of files and directories that can be created within a directory (calculated recursively). Note that a directory's own name is counted against its own quota, so the minimum name quota that may be applied to a directory is 1 (which will force the directory to stay empty).
This property is read-only.
Data Type
i64
dir_summary_size property (HadoopDFS Struct)
The total size of the directory contents, excluding file replicas.
Syntax
fn dir_summary_size(&self ) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The total size of the directory contents, excluding file replicas.
This property reflects the total size (in bytes) of the directory's contents, calculated recursively.
Unlike dir_summary_space_used, this property's value does not take file replicas into account, and thus should not be interpreted as the actual number of bytes the directory's contents use on disk.
This property is read-only.
Data Type
i64
dir_summary_space_quota property (HadoopDFS Struct)
The space quota imposed on the directory.
Syntax
fn dir_summary_space_quota(&self ) -> Result<i64, CloudFilesError>
Default Value
-1
Remarks
The space quota imposed on the directory.
This property reflects the overall space quota (in bytes) imposed on the directory, or -1 if the directory doesn't have a space quota set.
A space quota limits the total number of bytes the files within a directory (or any of its subdirectories) may consume across all storage mediums. Space quotas are tracked separately from each directory's various dir_summary_storage_quotas, and only one space quota may be applied to any given directory.
This property is read-only.
Data Type
i64
dir_summary_space_used property (HadoopDFS Struct)
The total amount of space the directory consumes on disk.
Syntax
fn dir_summary_space_used(&self ) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The total amount of space the directory consumes on disk.
This property reflects the total amount of space (in bytes) that the directory's contents, calculated recursively, consume on disk.
Unlike dir_summary_size, this property's value includes space consumed by file replicas.
This property is read-only.
Data Type
i64
dir_summary_storage_quota property (HadoopDFS Struct)
The storage type quota imposed on the directory.
Syntax
fn dir_summary_storage_quota(&self ) -> Result<i64, CloudFilesError>
Default Value
-1
Remarks
The storage type quota imposed on the directory.
This property reflects the storage type quota (in bytes) imposed on the directory for the dir_summary_storage_quota_type currently selected by dir_summary_storage_quota_index.
Storage type quotas limit the number of bytes that files within a directory (or any of its subdirectories) may consume on specific types of storage mediums. Multiple storage type quotas, each associated with a different type of storage medium, may be imposed on a directory simultaneously. Storage type quotas are tracked separately from a directory's overall dir_summary_space_quota.
Use dir_summary_storage_quota_count to determine how many storage type quotas are imposed on a directory, and dir_summary_storage_quota_index to select which storage type quota's information to reflect in the dir_summary_storage_quota, dir_summary_storage_quota_type, and dir_summary_storage_quota_used properties.
Note that it is possible for a storage type quota to be imposed on a directory without actually restricting any space usage, in which case this property's value will be -1.
This property is read-only.
Data Type
i64
dir_summary_storage_quota_count property (HadoopDFS Struct)
The number of storage type quotas associated with the directory.
Syntax
fn dir_summary_storage_quota_count(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
The number of storage type quotas associated with the directory.
This property reflects the number of storage type quotas associated with the directory. Use the dir_summary_storage_quota_index property to select which storage type quota's information to reflect in the dir_summary_storage_quota, dir_summary_storage_quota_type, and dir_summary_storage_quota_used properties.
This property is read-only.
Data Type
i32
dir_summary_storage_quota_index property (HadoopDFS Struct)
Selects the storage type quota to show information for.
Syntax
fn dir_summary_storage_quota_index(&self ) -> Result<i32, CloudFilesError>
Default Value
-1
Remarks
Selects the storage type quota to show information for.
This property selects which storage type quota's information to reflect in the dir_summary_storage_quota, dir_summary_storage_quota_type, and dir_summary_storage_quota_used properties; those properties are re-populated when this property's value is changed.
Valid values for this property are -1 to (dir_summary_storage_quota_count - 1); invalid indices are ignored. The default value is 0 if dir_summary_storage_quota_count is greater than 0, and -1 otherwise.
This property is read-only.
Data Type
i32
dir_summary_storage_quota_type property (HadoopDFS Struct)
The storage type associated with the storage type quota.
Syntax
fn dir_summary_storage_quota_type(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The storage type associated with the storage type quota.
This property reflects the storage type associated with the storage type quota currently selected by dir_summary_storage_quota_index.
This property is read-only.
Data Type
String
dir_summary_storage_quota_used property (HadoopDFS Struct)
The number of bytes consumed for the storage type quota.
Syntax
fn dir_summary_storage_quota_used(&self ) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The number of bytes consumed for the storage type quota.
This property reflects the number of bytes that have been consumed for the storage type quota currently selected by dir_summary_storage_quota_index.
This property is read-only.
Data Type
i64
encryption_algorithm property (HadoopDFS Struct)
The encryption algorithm.
Syntax
fn encryption_algorithm(&self ) -> Result<i32, CloudFilesError>
fn set_encryption_algorithm(&self, value : i32) -> Option<CloudFilesError>
Possible Values
0 // AES
1 // Blowfish
2 // CAST
3 // DES
4 // IDEA
5 // RC2
6 // RC4
7 // TEA
8 // TripleDES
9 // Twofish
10 // Rijndael
11 // ChaCha
12 // XSalsa20
Default Value
0
Remarks
This property specifies the encryption algorithm to be used. The maximum allowable key size is automatically used for the selected algorithm. Possible values are:
| Algorithm | Key Size |
| 0 (eaAES - default) | 256 |
| 1 (eaBlowfish) | 448 |
| 2 (eaCAST) | 128 |
| 3 (eaDES) | 64 |
| 4 (eaIDEA) | 128 |
| 5 (eaRC2) | 128 |
| 6 (eaRC4) | 2048 |
| 7 (eaTEA) | 128 |
| 8 (eaTripleDES) | 192 |
| 9 (eaTwofish) | 256 |
| 10 (eaRijndael) | 256 |
| 11 (eaChaCha) | 256 |
| 12 (eaXSalsa20) | 256 |
Data Type
i32
encryption_password property (HadoopDFS Struct)
The encryption password.
Syntax
fn encryption_password(&self ) -> Result<String, CloudFilesError>
fn set_encryption_password(&self, value : &str) -> Option<CloudFilesError>
fn set_encryption_password_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
If this property is populated when upload_file or download_file is called, the struct will attempt to encrypt or decrypt the data before uploading or after downloading it.
The struct uses the value specified here to generate the necessary encryption Key and IV values using the PKCS5 password digest algorithm. This provides a simpler alternative to creating and managing Key and IV values directly.
However, it is also possible to explicitly specify the Key and IV values to use by setting the EncryptionKey and EncryptionIV configuration settings. This may be necessary if, e.g., the data needs to be encrypted/decrypted by another utility which generates Key and IV values differently.
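For example, an encrypted upload might look like the following sketch. The remote path argument to upload_file is assumed here (by analogy with the earlier examples), and the password and paths are placeholders.
// Sketch: encrypt a local file with AES (the default algorithm) before uploading it.
hdfs.set_encryption_algorithm(0); // 0 = eaAES (default)
hdfs.set_encryption_password("my secret password"); // placeholder
hdfs.set_local_file("../MyFile.zip"); // placeholder local path
hdfs.upload_file("/MyFile.zip"); // remote path argument assumed, as in the earlier upload example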
Data Type
String
firewall_auto_detect property (HadoopDFS Struct)
Whether to automatically detect and use firewall system settings, if available.
Syntax
fn firewall_auto_detect(&self ) -> Result<bool, CloudFilesError>
fn set_firewall_auto_detect(&self, value : bool) -> Option<CloudFilesError>
Default Value
false
Remarks
Whether to automatically detect and use firewall system settings, if available.
Data Type
bool
firewall_type property (HadoopDFS Struct)
The type of firewall to connect through.
Syntax
fn firewall_type(&self ) -> Result<i32, CloudFilesError>
fn set_firewall_type(&self, value : i32) -> Option<CloudFilesError>
Possible Values
0 // None
1 // Tunnel
2 // SOCKS4
3 // SOCKS5
10 // SOCKS4A
Default Value
0
Remarks
The type of firewall to connect through. The applicable values are as follows:
| fwNone (0) | No firewall (default setting). |
| fwTunnel (1) | Connect through a tunneling proxy. firewall_port is set to 80. |
| fwSOCKS4 (2) | Connect through a SOCKS4 Proxy. firewall_port is set to 1080. |
| fwSOCKS5 (3) | Connect through a SOCKS5 Proxy. firewall_port is set to 1080. |
| fwSOCKS4A (10) | Connect through a SOCKS4A Proxy. firewall_port is set to 1080. |
Data Type
i32
firewall_host property (HadoopDFS Struct)
The name or IP address of the firewall (optional).
Syntax
fn firewall_host(&self ) -> Result<String, CloudFilesError>
fn set_firewall_host(&self, value : &str) -> Option<CloudFilesError>
fn set_firewall_host_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
The name or IP address of the firewall (optional). If a firewall_host is given, the requested connections will be authenticated through the specified firewall when connecting.
If this property is set to a Domain Name, a DNS request is initiated. Upon successful termination of the request, this property is set to the corresponding address. If the search is not successful, the struct fails with an error.
Data Type
String
firewall_password property (HadoopDFS Struct)
A password if authentication is to be used when connecting through the firewall.
Syntax
fn firewall_password(&self ) -> Result<String, CloudFilesError>
fn set_firewall_password(&self, value : &str) -> Option<CloudFilesError>
fn set_firewall_password_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
A password if authentication is to be used when connecting through the firewall. If firewall_host is specified, the firewall_user and firewall_password properties are used to connect and authenticate to the given firewall. If the authentication fails, the struct fails with an error.
Data Type
String
firewall_port property (HadoopDFS Struct)
The Transmission Control Protocol (TCP) port for the firewall_host.
Syntax
fn firewall_port(&self ) -> Result<i32, CloudFilesError>
fn set_firewall_port(&self, value : i32) -> Option<CloudFilesError>
Default Value
0
Remarks
The Transmission Control Protocol (TCP) port for the firewall_host. See the description of the firewall_host property for details.
NOTE: This property is set automatically when firewall_type is set to a valid value. See the description of the firewall_type property for details.
Data Type
i32
firewall_user property (HadoopDFS Struct)
A username if authentication is to be used when connecting through a firewall.
Syntax
fn firewall_user(&self ) -> Result<String, CloudFilesError>
fn set_firewall_user(&self, value : &str) -> Option<CloudFilesError>
fn set_firewall_user_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
A username if authentication is to be used when connecting through a firewall. If firewall_host is specified, this property and the firewall_password property are used to connect and authenticate to the given Firewall. If the authentication fails, the struct fails with an error.
Data Type
String
idle property (HadoopDFS Struct)
The current status of the struct.
Syntax
fn idle(&self ) -> Result<bool, CloudFilesError>
Default Value
true
Remarks
This property will be false if the struct is currently busy (communicating or waiting for an answer), and true at all other times.
This property is read-only.
Data Type
bool
local_file property (HadoopDFS Struct)
The location of the local file.
Syntax
fn local_file(&self ) -> Result<String, CloudFilesError>
fn set_local_file(&self, value : &str) -> Option<CloudFilesError>
fn set_local_file_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
This property specifies the location of a file on disk. This is used as the source file when calling upload_file or append_file, and as the destination file when calling download_file or do_custom_op.
Data Type
String
local_host property (HadoopDFS Struct)
The name of the local host or user-assigned IP interface through which connections are initiated or accepted.
Syntax
fn local_host(&self ) -> Result<String, CloudFilesError>
fn set_local_host(&self, value : &str) -> Option<CloudFilesError>
fn set_local_host_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
This property contains the name of the local host as obtained by the gethostname() system call, or if the user has assigned an IP address, the value of that address.
In multihomed hosts (machines with more than one IP interface) setting LocalHost to the IP address of an interface will make the struct initiate connections (or accept in the case of server structs) only through that interface. It is recommended to provide an IP address rather than a hostname when setting this property to ensure the desired interface is used.
If the struct is connected, the local_host property shows the IP address of the interface through which the connection is made in internet dotted format (aaa.bbb.ccc.ddd). In most cases, this is the address of the local host, except for multihomed hosts (machines with more than one IP interface).
NOTE: local_host is not persistent. You must always set it in code.
Data Type
String
other_headers property (HadoopDFS Struct)
Other headers as determined by the user (optional).
Syntax
fn other_headers(&self ) -> Result<String, CloudFilesError>
fn set_other_headers(&self, value : &str) -> Option<CloudFilesError>
fn set_other_headers_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
This property can be set to a string of headers to be appended to the HTTP request headers the struct creates from its other properties.
The headers must follow the format Header: Value as described in the HTTP specifications. Header lines should be separated by CRLF ("\r\n").
Use this property with caution. If this property contains invalid headers, HTTP requests may fail.
This property is useful for extending the functionality of the struct beyond what is provided.
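For example (the header names and values below are arbitrary placeholders):
// Sketch: append two custom headers to every request; lines are separated by CRLF.
hdfs.set_other_headers("X-Custom-One: value1\r\nX-Custom-Two: value2\r\n");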
Data Type
String
overwrite property (HadoopDFS Struct)
Whether to overwrite the local or remote file.
Syntax
fn overwrite(&self ) -> Result<bool, CloudFilesError>
fn set_overwrite(&self, value : bool) -> Option<CloudFilesError>
Default Value
false
Remarks
When calling download_file, this property determines if local_file should be overwritten if it already exists.
When calling upload_file, this property determines if the remote file should be overwritten if it already exists.
Data Type
bool
parsed_header_count property (HadoopDFS Struct)
The number of records in the ParsedHeader arrays.
Syntax
fn parsed_header_count(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
This property controls the size of the following arrays:
- parsed_header_field
- parsed_header_value
The array indices start at 0 and end at parsed_header_count - 1.
This property is read-only.
Data Type
i32
parsed_header_field property (HadoopDFS Struct)
This property contains the name of the HTTP header (in the same case as it was delivered).
Syntax
fn parsed_header_field(&self , ParsedHeaderIndex : i32) -> Result<String, CloudFilesError>
Default Value
""
Remarks
This property contains the name of the HTTP header (in the same case as it was delivered).
The ParsedHeaderIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ParsedHeaderCount property.
This property is read-only.
Data Type
String
parsed_header_value property (HadoopDFS Struct)
This property contains the header contents.
Syntax
fn parsed_header_value(&self , ParsedHeaderIndex : i32) -> Result<String, CloudFilesError>
Default Value
""
Remarks
This property contains the header contents.
The ParsedHeaderIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ParsedHeaderCount property.
This property is read-only.
Data Type
String
password property (HadoopDFS Struct)
The password to use for authentication.
Syntax
fn password(&self ) -> Result<String, CloudFilesError>
fn set_password(&self, value : &str) -> Option<CloudFilesError>
fn set_password_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
This property specifies the password to use for authentication.
Refer to auth_mechanism for more information.
Data Type
String
proxy_auth_scheme property (HadoopDFS Struct)
The type of authorization to perform when connecting to the proxy.
Syntax
fn proxy_auth_scheme(&self ) -> Result<i32, CloudFilesError>
fn set_proxy_auth_scheme(&self, value : i32) -> Option<CloudFilesError>
Possible Values
0 // Basic
1 // Digest
2 // Proprietary
3 // None
4 // Ntlm
5 // Negotiate
Default Value
0
Remarks
The type of authorization to perform when connecting to the proxy. This is used only when the proxy_user and proxy_password properties are set.
proxy_auth_scheme should be set to authNone (3) when no authentication is expected.
By default, proxy_auth_scheme is authBasic (0), and if the proxy_user and proxy_password properties are set, the struct will attempt basic authentication.
If proxy_auth_scheme is set to authDigest (1), digest authentication will be attempted instead.
If proxy_auth_scheme is set to authProprietary (2), then the authorization token will not be generated by the struct; refer to the ProxyAuthorization configuration setting for more information about supplying this token manually.
If proxy_auth_scheme is set to authNtlm (4), NTLM authentication will be used.
For security reasons, setting this property will clear the values of proxy_user and proxy_password.
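Because setting this property clears proxy_user and proxy_password, select the scheme before supplying credentials, as in this sketch (the host and credentials are placeholders):
// Sketch: route requests through a proxy using Basic authentication.
// Note the order: the scheme is selected first, since setting it clears the credentials.
hdfs.set_proxy_server("proxy.example.com"); // placeholder
hdfs.set_proxy_port(8080);
hdfs.set_proxy_auth_scheme(0); // 0 = authBasic
hdfs.set_proxy_user("proxyuser"); // placeholder
hdfs.set_proxy_password("proxypass"); // placeholder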
Data Type
i32
proxy_auto_detect property (HadoopDFS Struct)
Whether to automatically detect and use proxy system settings, if available.
Syntax
fn proxy_auto_detect(&self ) -> Result<bool, CloudFilesError>
fn set_proxy_auto_detect(&self, value : bool) -> Option<CloudFilesError>
Default Value
false
Remarks
Whether to automatically detect and use proxy system settings, if available. The default value is false.
Data Type
bool
proxy_password property (HadoopDFS Struct)
A password if authentication is to be used for the proxy.
Syntax
fn proxy_password(&self ) -> Result<String, CloudFilesError>
fn set_proxy_password(&self, value : &str) -> Option<CloudFilesError>
fn set_proxy_password_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
A password if authentication is to be used for the proxy.
If proxy_auth_scheme is set to Basic Authentication, the proxy_user and proxy_password properties are Base64 encoded and the proxy authentication token will be generated in the form Basic [encoded-user-password].
If proxy_auth_scheme is set to Digest Authentication, the proxy_user and proxy_password properties are used to respond to the Digest Authentication challenge from the server.
If proxy_auth_scheme is set to NTLM Authentication, the proxy_user and proxy_password properties are used to authenticate through NTLM negotiation.
Data Type
String
proxy_port property (HadoopDFS Struct)
The Transmission Control Protocol (TCP) port for the proxy Server (default 80).
Syntax
fn proxy_port(&self ) -> Result<i32, CloudFilesError>
fn set_proxy_port(&self, value : i32) -> Option<CloudFilesError>
Default Value
80
Remarks
The Transmission Control Protocol (TCP) port for the proxy_server (default 80). See the description of the proxy_server property for details.
Data Type
i32
proxy_server property (HadoopDFS Struct)
If a proxy Server is given, then the HTTP request is sent to the proxy instead of the server otherwise specified.
Syntax
fn proxy_server(&self ) -> Result<String, CloudFilesError>
fn set_proxy_server(&self, value : &str) -> Option<CloudFilesError>
fn set_proxy_server_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
If a proxy_server is given, then the HTTP request is sent to the proxy instead of the server otherwise specified.
If the proxy_server property is set to a domain name, a DNS request is initiated. Upon successful termination of the request, the proxy_server property is set to the corresponding address. If the search is not successful, an error is returned.
Data Type
String
proxy_ssl property (HadoopDFS Struct)
When to use a Secure Sockets Layer (SSL) for the connection to the proxy.
Syntax
fn proxy_ssl(&self ) -> Result<i32, CloudFilesError>
fn set_proxy_ssl(&self, value : i32) -> Option<CloudFilesError>
Possible Values
0 // Automatic
1 // Always
2 // Never
3 // Tunnel
Default Value
0
Remarks
When to use a Secure Sockets Layer (SSL) for the connection to the proxy. The applicable values are as follows:
| psAutomatic (0) | Default setting. If the URL is an https URL, the struct will use the psTunnel option. If the URL is an http URL, the struct will use the psNever option. |
| psAlways (1) | The connection is always SSL-enabled. |
| psNever (2) | The connection is not SSL-enabled. |
| psTunnel (3) | The connection is made through a tunneling (HTTP) proxy. |
Data Type
i32
proxy_user property (HadoopDFS Struct)
A username if authentication is to be used for the proxy.
Syntax
fn proxy_user(&self ) -> Result<String, CloudFilesError>
fn set_proxy_user(&self, value : &str) -> Option<CloudFilesError>
fn set_proxy_user_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
A username if authentication is to be used for the proxy.
If proxy_auth_scheme is set to Basic Authentication, the proxy_user and proxy_password properties are Base64 encoded and the proxy authentication token will be generated in the form Basic [encoded-user-password].
If proxy_auth_scheme is set to Digest Authentication, the proxy_user and proxy_password properties are used to respond to the Digest Authentication challenge from the server.
If proxy_auth_scheme is set to NTLM Authentication, the proxy_user and proxy_password properties are used to authenticate through NTLM negotiation.
Data Type
String
query_param_count property (HadoopDFS Struct)
The number of records in the QueryParam arrays.
Syntax
fn query_param_count(&self ) -> Result<i32, CloudFilesError>
fn set_query_param_count(&self, value : i32) -> Option<CloudFilesError>
Default Value
0
Remarks
This property controls the size of the following arrays:
- query_param_name
- query_param_value
The array indices start at 0 and end at query_param_count - 1.
Data Type
i32
query_param_name property (HadoopDFS Struct)
The name of the query parameter.
Syntax
fn query_param_name(&self , QueryParamIndex : i32) -> Result<String, CloudFilesError>
fn set_query_param_name(&self, QueryParamIndex : i32, value : &str) -> Option<CloudFilesError>
fn set_query_param_name_ref(&self, QueryParamIndex : i32, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
The name of the query parameter.
This property specifies the name of the query parameter.
The QueryParamIndex parameter specifies the index of the item in the array. The size of the array is controlled by the QueryParamCount property.
Data Type
String
query_param_value property (HadoopDFS Struct)
The value of the query parameter.
Syntax
fn query_param_value(&self , QueryParamIndex : i32) -> Result<String, CloudFilesError>
fn set_query_param_value(&self, QueryParamIndex : i32, value : &str) -> Option<CloudFilesError>
fn set_query_param_value_ref(&self, QueryParamIndex : i32, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
The value of the query parameter.
This property specifies the value of the query parameter. The struct will automatically URL-encode this value when sending the request.
The QueryParamIndex parameter specifies the index of the item in the array. The size of the array is controlled by the QueryParamCount property.
Data Type
String
read_bytes property (HadoopDFS Struct)
The number of bytes to read when downloading a file.
Syntax
fn read_bytes(&self ) -> Result<i64, CloudFilesError>
fn set_read_bytes(&self, value : i64) -> Option<CloudFilesError>
Default Value
-1
Remarks
This property specifies how many bytes should be read when download_file is called. It can be used in tandem with start_byte to specify a specific range of the file to download.
If set to -1 (default), there is no limit on how many bytes will be read.
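For example, downloading only a slice of a file might look like the following sketch. The set_start_byte setter is assumed to exist for the start_byte property, the download_file path argument follows the earlier examples, and the paths and offsets are placeholders.
// Sketch: download 1 MiB of the file starting at byte offset 2048.
hdfs.set_start_byte(2048); // assumed setter for the start_byte property
hdfs.set_read_bytes(1048576);
hdfs.set_local_file("../Slice.bin"); // placeholder local path
hdfs.download_file("/work_files/large_file.bin"); // placeholder remote path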
Data Type
i64
resource_data property (HadoopDFS Struct)
The data that was downloaded, or that should be uploaded.
Syntax
fn resource_data(&self ) -> Result<Vec<u8>, CloudFilesError>
fn set_resource_data(&self, value : Vec<u8>) -> Option<CloudFilesError>
fn set_resource_data_ref(&self, value : &[u8]) -> Option<CloudFilesError>
Default Value
""
Remarks
This property is populated with file data after calling download_file if local_file is not set.
This property can also be set before calling upload_file; its data will be uploaded if local_file is not set.
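As an illustrative sketch (assuming hdfs is an initialized HadoopDFS instance and upload_file takes the destination path), in-memory data can be uploaded by leaving local_file empty and setting this property:
// Upload the contents of resource_data instead of a local file.
fn upload_from_memory(hdfs: &cloudfiles::HadoopDFS) -> Result<(), cloudfiles::CloudFilesError> {
    if let Some(err) = hdfs.set_local_file("") {                   // ensure local_file is not set
        return Err(err);
    }
    if let Some(err) = hdfs.set_resource_data_ref(b"hello, HDFS") {
        return Err(err);
    }
    hdfs.upload_file("/work_files/hello.txt")                      // assumed signature: remote path
}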
Data Type
Vec
resource_count property (HadoopDFS Struct)
The number of records in the Resource arrays.
Syntax
fn resource_count(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
This property controls the size of the following arrays:
- resource_access_time
- resource_block_size
- resource_child_count
- resource_group
- resource_modified_time
- resource_name
- resource_owner
- resource_path
- resource_permission
- resource_replication
- resource_size
- resource_symlink_target
- resource_type
This property is read-only.
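For illustration, the sketch below lists a directory and walks the Resource* arrays by index; it assumes hdfs is an initialized HadoopDFS instance and that list_resources accepts the directory path.
// Print the path and size of each resource returned by list_resources.
fn print_listing(hdfs: &cloudfiles::HadoopDFS) -> Result<(), cloudfiles::CloudFilesError> {
    hdfs.list_resources("/work_files")?;
    for i in 0..hdfs.resource_count()? {
        println!("{} ({} bytes)", hdfs.resource_path(i)?, hdfs.resource_size(i)?);
    }
    Ok(())
}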
Data Type
i32
resource_access_time property (HadoopDFS Struct)
The last access time of the resource.
Syntax
fn resource_access_time(&self , ResourceIndex : i32) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The last access time of the resource.
This property reflects the last access time of the resource, in milliseconds relative to the Unix epoch.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
i64
resource_block_size property (HadoopDFS Struct)
The block size of the file.
Syntax
fn resource_block_size(&self , ResourceIndex : i32) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The block size of the file.
This property reflects the block size of the file, in bytes.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
i64
resource_child_count property (HadoopDFS Struct)
The number of children in the directory.
Syntax
fn resource_child_count(&self , ResourceIndex : i32) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The number of children in the directory.
This property reflects the number of immediate children in the directory. Always 0 for files.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
i64
resource_group property (HadoopDFS Struct)
The name of the resource's group.
Syntax
fn resource_group(&self , ResourceIndex : i32) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The name of the resource's group.
This property reflects the name of the resource's group.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
String
resource_modified_time property (HadoopDFS Struct)
The last modified time of the resource.
Syntax
fn resource_modified_time(&self , ResourceIndex : i32) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The last modified time of the resource.
This property reflects the last modified time of the resource, in milliseconds relative to the Unix epoch.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
i64
resource_name property (HadoopDFS Struct)
The name of the resource.
Syntax
fn resource_name(&self , ResourceIndex : i32) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The name of the resource.
This property reflects the name of the resource.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
String
resource_owner property (HadoopDFS Struct)
The name of the resource's owner.
Syntax
fn resource_owner(&self , ResourceIndex : i32) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The name of the resource's owner.
This property reflects the name of the resource's owner.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
String
resource_path property (HadoopDFS Struct)
The full path of the resource.
Syntax
fn resource_path(&self , ResourceIndex : i32) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The full path of the resource.
This property reflects the full path of the resource. (Note that this property's value is not returned by the server, it is calculated by the struct for convenience.)
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
String
resource_permission property (HadoopDFS Struct)
The resource's permission bits.
Syntax
fn resource_permission(&self , ResourceIndex : i32) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The resource's permission bits.
This property reflects the resource's permission bits, represented as an octal string (e.g., 755).
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
String
resource_replication property (HadoopDFS Struct)
The replication factor of the file.
Syntax
fn resource_replication(&self , ResourceIndex : i32) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
The replication factor of the file.
This property reflects the replication factor of the file. Always 0 for directories.
A file's replication factor determines how many copies of the file's data ("replicas") HDFS maintains in total. For example, a replication factor of 3 means that HDFS stores three copies of the file's data: the original plus two additional replicas. Thus, the minimum replication factor a file can have is 1.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
i32
resource_size property (HadoopDFS Struct)
The size of the file.
Syntax
fn resource_size(&self , ResourceIndex : i32) -> Result<i64, CloudFilesError>
Default Value
0
Remarks
The size of the file.
This property reflects the size of the file, in bytes. Always 0 for directories.
Note that the actual amount of space the file consumes will be greater than this property's value if resource_replication is greater than 1. In that case, multiply the values of this property and resource_replication to obtain the total number of bytes consumed by the file and its replicas.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
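As a small worked sketch of the size-times-replication rule above (assuming hdfs is an initialized HadoopDFS instance with populated Resource* arrays):
// Total bytes consumed by a file and its replicas: resource_size * resource_replication.
fn bytes_consumed(hdfs: &cloudfiles::HadoopDFS, index: i32) -> Result<i64, cloudfiles::CloudFilesError> {
    let size = hdfs.resource_size(index)?;                  // 0 for directories
    let replication = hdfs.resource_replication(index)?;    // 0 for directories, >= 1 for files
    Ok(size * replication as i64)
}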
Data Type
i64
resource_symlink_target property (HadoopDFS Struct)
The full target path of the symlink.
Syntax
fn resource_symlink_target(&self , ResourceIndex : i32) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The full target path of the symlink.
This property reflects the full target path of the symlink if resource_type is 2 (hrtSymLink) and the server returns the target path information; otherwise, it is an empty string.
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
Data Type
String
resource_type property (HadoopDFS Struct)
The resource type.
Syntax
fn resource_type(&self , ResourceIndex : i32) -> Result<i32, CloudFilesError>
Possible Values
0 // File
1 // Directory
2 // SymLink
Default Value
0
Remarks
The resource type.
This property reflects the resource's type. Possible values are:
| 0 (hrtFile) | A file. |
| 1 (hrtDirectory) | A directory. |
| 2 (hrtSymLink) | A symlink. |
The ResourceIndex parameter specifies the index of the item in the array. The size of the array is controlled by the ResourceCount property.
This property is read-only.
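For illustration, the sketch below branches on the type values listed above (assuming hdfs is an initialized HadoopDFS instance with populated Resource* arrays):
// Describe a resource based on its type.
fn describe(hdfs: &cloudfiles::HadoopDFS, index: i32) -> Result<String, cloudfiles::CloudFilesError> {
    Ok(match hdfs.resource_type(index)? {
        0 => format!("file: {}", hdfs.resource_name(index)?),                          // hrtFile
        1 => format!("directory with {} children", hdfs.resource_child_count(index)?), // hrtDirectory
        2 => format!("symlink -> {}", hdfs.resource_symlink_target(index)?),           // hrtSymLink
        other => format!("unknown resource type {}", other),
    })
}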
Data Type
i32
ssl_accept_server_cert_effective_date property (HadoopDFS Struct)
The date on which this certificate becomes valid.
Syntax
fn ssl_accept_server_cert_effective_date(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The date on which this certificate becomes valid. Before this date, it is not valid. The date is localized to the system's time zone. The following example illustrates the format of an encoded date:
23-Jan-2000 15:00:00.
This property is read-only.
Data Type
String
ssl_accept_server_cert_expiration_date property (HadoopDFS Struct)
The date on which the certificate expires.
Syntax
fn ssl_accept_server_cert_expiration_date(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The date on which the certificate expires. After this date, the certificate will no longer be valid. The date is localized to the system's time zone. The following example illustrates the format of an encoded date:
23-Jan-2001 15:00:00.
This property is read-only.
Data Type
String
ssl_accept_server_cert_extended_key_usage property (HadoopDFS Struct)
A comma-delimited list of extended key usage identifiers.
Syntax
fn ssl_accept_server_cert_extended_key_usage(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
A comma-delimited list of extended key usage identifiers. These are the same as ASN.1 object identifiers (OIDs).
This property is read-only.
Data Type
String
ssl_accept_server_cert_fingerprint property (HadoopDFS Struct)
The hex-encoded, 16-byte MD5 fingerprint of the certificate.
Syntax
fn ssl_accept_server_cert_fingerprint(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 16-byte MD5 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: bc:2a:72:af:fe:58:17:43:7a:5f:ba:5a:7c:90:f7:02
This property is read-only.
Data Type
String
ssl_accept_server_cert_fingerprint_sha1 property (HadoopDFS Struct)
The hex-encoded, 20-byte SHA-1 fingerprint of the certificate.
Syntax
fn ssl_accept_server_cert_fingerprint_sha1(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 20-byte SHA-1 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: 30:7b:fa:38:65:83:ff:da:b4:4e:07:3f:17:b8:a4:ed:80:be:ff:84
This property is read-only.
Data Type
String
ssl_accept_server_cert_fingerprint_sha256 property (HadoopDFS Struct)
The hex-encoded, 32-byte SHA-256 fingerprint of the certificate.
Syntax
fn ssl_accept_server_cert_fingerprint_sha256(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 32-byte SHA-256 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: 6a:80:5c:33:a9:43:ea:b0:96:12:8a:64:96:30:ef:4a:8a:96:86:ce:f4:c7:be:10:24:8e:2b:60:9e:f3:59:53
This property is read-only.
Data Type
String
ssl_accept_server_cert_issuer property (HadoopDFS Struct)
The issuer of the certificate.
Syntax
fn ssl_accept_server_cert_issuer(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The issuer of the certificate. This property contains a string representation of the name of the issuing authority for the certificate.
This property is read-only.
Data Type
String
ssl_accept_server_cert_private_key property (HadoopDFS Struct)
The private key of the certificate (if available).
Syntax
fn ssl_accept_server_cert_private_key(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The private key of the certificate (if available). The key is provided as PEM/Base64-encoded data.
NOTE: The ssl_accept_server_cert_private_key may be available but not exportable. In this case, ssl_accept_server_cert_private_key returns an empty string.
This property is read-only.
Data Type
String
ssl_accept_server_cert_private_key_available property (HadoopDFS Struct)
Whether a PrivateKey is available for the selected certificate.
Syntax
fn ssl_accept_server_cert_private_key_available(&self ) -> Result<bool, CloudFilesError>
Default Value
false
Remarks
Whether a ssl_accept_server_cert_private_key is available for the selected certificate. If ssl_accept_server_cert_private_key_available is True, the certificate may be used for authentication purposes (e.g., server authentication).
This property is read-only.
Data Type
bool
ssl_accept_server_cert_private_key_container property (HadoopDFS Struct)
The name of the PrivateKey container for the certificate (if available).
Syntax
fn ssl_accept_server_cert_private_key_container(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The name of the ssl_accept_server_cert_private_key container for the certificate (if available). This functionality is available only on Windows platforms.
This property is read-only.
Data Type
String
ssl_accept_server_cert_public_key property (HadoopDFS Struct)
The public key of the certificate.
Syntax
fn ssl_accept_server_cert_public_key(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The public key of the certificate. The key is provided as PEM/Base64-encoded data.
This property is read-only.
Data Type
String
ssl_accept_server_cert_public_key_algorithm property (HadoopDFS Struct)
The textual description of the certificate's public key algorithm.
Syntax
fn ssl_accept_server_cert_public_key_algorithm(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The textual description of the certificate's public key algorithm. The property contains either the name of the algorithm (e.g., "RSA" or "RSA_DH") or an object identifier (OID) string representing the algorithm.
This property is read-only.
Data Type
String
ssl_accept_server_cert_public_key_length property (HadoopDFS Struct)
The length of the certificate's public key (in bits).
Syntax
fn ssl_accept_server_cert_public_key_length(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
The length of the certificate's public key (in bits). Common values are 512, 1024, and 2048.
This property is read-only.
Data Type
i32
ssl_accept_server_cert_serial_number property (HadoopDFS Struct)
The serial number of the certificate encoded as a string.
Syntax
fn ssl_accept_server_cert_serial_number(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The serial number of the certificate encoded as a string. The number is encoded as a series of hexadecimal digits, with each pair representing a byte of the serial number.
This property is read-only.
Data Type
String
ssl_accept_server_cert_signature_algorithm property (HadoopDFS Struct)
The text description of the certificate's signature algorithm.
Syntax
fn ssl_accept_server_cert_signature_algorithm(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The text description of the certificate's signature algorithm. The property contains either the name of the algorithm (e.g., "RSA" or "RSA_MD5RSA") or an object identifier (OID) string representing the algorithm.
This property is read-only.
Data Type
String
ssl_accept_server_cert_store property (HadoopDFS Struct)
The name of the certificate store for the client certificate.
Syntax
fn ssl_accept_server_cert_store(&self ) -> Result<Vec<u8>, CloudFilesError>
fn set_ssl_accept_server_cert_store(&self, value : Vec<u8>) -> Option<CloudFilesError>
fn set_ssl_accept_server_cert_store_ref(&self, value : &[u8]) -> Option<CloudFilesError>
Default Value
"MY"
Remarks
The name of the certificate store for the client certificate.
The ssl_accept_server_cert_store_type property denotes the type of the certificate store specified by ssl_accept_server_cert_store. If the store is password-protected, specify the password in ssl_accept_server_cert_store_password.
ssl_accept_server_cert_store is used in conjunction with the ssl_accept_server_cert_subject property to specify client certificates. If ssl_accept_server_cert_store has a value, and ssl_accept_server_cert_subject or ssl_accept_server_cert_encoded is set, a search for a certificate is initiated. Please see the ssl_accept_server_cert_subject property for details.
Designations of certificate stores are platform dependent.
The following designations are the most common User and Machine certificate stores in Windows:
| MY | A certificate store holding personal certificates with their associated private keys. |
| CA | Certifying authority certificates. |
| ROOT | Root certificates. |
When the certificate store type is cstPFXFile, this property must be set to the name of the file. When the type is cstPFXBlob, the property must be set to the binary contents of a PFX file (i.e., PKCS#12 certificate store).
Data Type
Vec
ssl_accept_server_cert_store_password property (HadoopDFS Struct)
If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store.
Syntax
fn ssl_accept_server_cert_store_password(&self ) -> Result<String, CloudFilesError>
fn set_ssl_accept_server_cert_store_password(&self, value : &str) -> Option<CloudFilesError>
fn set_ssl_accept_server_cert_store_password_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store.
Data Type
String
ssl_accept_server_cert_store_type property (HadoopDFS Struct)
The type of certificate store for this certificate.
Syntax
fn ssl_accept_server_cert_store_type(&self ) -> Result<i32, CloudFilesError>
fn set_ssl_accept_server_cert_store_type(&self, value : i32) -> Option<CloudFilesError>
Possible Values
0 // User
1 // Machine
2 // PFXFile
3 // PFXBlob
4 // JKSFile
5 // JKSBlob
6 // PEMKeyFile
7 // PEMKeyBlob
8 // PublicKeyFile
9 // PublicKeyBlob
10 // SSHPublicKeyBlob
11 // P7BFile
12 // P7BBlob
13 // SSHPublicKeyFile
14 // PPKFile
15 // PPKBlob
16 // XMLFile
17 // XMLBlob
18 // JWKFile
19 // JWKBlob
20 // SecurityKey
21 // BCFKSFile
22 // BCFKSBlob
23 // PKCS11
99 // Auto
Default Value
0
Remarks
The type of certificate store for this certificate.
The struct supports both public and private keys in a variety of formats. When the cstAuto value is used, the struct will automatically determine the type. This property can take one of the following values:
| 0 (cstUser - default) | For Windows, this specifies that the certificate store is a certificate store owned by the current user.
NOTE: This store type is not available in Java. |
| 1 (cstMachine) | For Windows, this specifies that the certificate store is a machine store.
NOTE: This store type is not available in Java. |
| 2 (cstPFXFile) | The certificate store is the name of a PFX (PKCS#12) file containing certificates. |
| 3 (cstPFXBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in PFX (PKCS#12) format. |
| 4 (cstJKSFile) | The certificate store is the name of a Java Key Store (JKS) file containing certificates.
NOTE: This store type is only available in Java. |
| 5 (cstJKSBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in Java Key Store (JKS) format.
NOTE: This store type is only available in Java. |
| 6 (cstPEMKeyFile) | The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate. |
| 7 (cstPEMKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains a private key and an optional certificate. |
| 8 (cstPublicKeyFile) | The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate. |
| 9 (cstPublicKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains a PEM- or DER-encoded public key certificate. |
| 10 (cstSSHPublicKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains an SSH-style public key. |
| 11 (cstP7BFile) | The certificate store is the name of a PKCS#7 file containing certificates. |
| 12 (cstP7BBlob) | The certificate store is a string (binary) representing a certificate store in PKCS#7 format. |
| 13 (cstSSHPublicKeyFile) | The certificate store is the name of a file that contains an SSH-style public key. |
| 14 (cstPPKFile) | The certificate store is the name of a file that contains a PPK (PuTTY Private Key). |
| 15 (cstPPKBlob) | The certificate store is a string (binary) that contains a PPK (PuTTY Private Key). |
| 16 (cstXMLFile) | The certificate store is the name of a file that contains a certificate in XML format. |
| 17 (cstXMLBlob) | The certificate store is a string that contains a certificate in XML format. |
| 18 (cstJWKFile) | The certificate store is the name of a file that contains a JWK (JSON Web Key). |
| 19 (cstJWKBlob) | The certificate store is a string that contains a JWK (JSON Web Key). |
| 21 (cstBCFKSFile) | The certificate store is the name of a file that contains a BCFKS (Bouncy Castle FIPS Key Store).
NOTE: This store type is only available in Java and .NET. |
| 22 (cstBCFKSBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in BCFKS (Bouncy Castle FIPS Key Store) format.
NOTE: This store type is only available in Java and .NET. |
| 23 (cstPKCS11) | The certificate is present on a physical security key accessible via a PKCS#11 interface.
To use a security key, the necessary data must first be collected using the CertMgr struct. The list_store_certificates method may be called after setting cert_store_type to cstPKCS11, cert_store_password to the PIN, and cert_store to the full path of the PKCS#11 DLL. The certificate information returned in the on_cert_list event's CertEncoded parameter may be saved for later use. When using a certificate, pass the previously saved security key information as the ssl_accept_server_cert_store and set ssl_accept_server_cert_store_password to the PIN. |
| 99 (cstAuto) | The store type is automatically detected from the input data. This setting may be used with both public and private keys and can detect any of the supported formats automatically. |
Data Type
i32
ssl_accept_server_cert_subject_alt_names property (HadoopDFS Struct)
Comma-separated lists of alternative subject names for the certificate.
Syntax
fn ssl_accept_server_cert_subject_alt_names(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
Comma-separated lists of alternative subject names for the certificate.
This property is read-only.
Data Type
String
ssl_accept_server_cert_thumbprint_md5 property (HadoopDFS Struct)
The MD5 hash of the certificate.
Syntax
fn ssl_accept_server_cert_thumbprint_md5(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The MD5 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_accept_server_cert_thumbprint_sha1 property (HadoopDFS Struct)
The SHA-1 hash of the certificate.
Syntax
fn ssl_accept_server_cert_thumbprint_sha1(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The SHA-1 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_accept_server_cert_thumbprint_sha256 property (HadoopDFS Struct)
The SHA-256 hash of the certificate.
Syntax
fn ssl_accept_server_cert_thumbprint_sha256(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The SHA-256 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_accept_server_cert_usage property (HadoopDFS Struct)
The text description of UsageFlags.
Syntax
fn ssl_accept_server_cert_usage(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The text description of ssl_accept_server_cert_usage_flags.
This value will be one or more of the following strings and will be separated by commas:
- Digital Signature
- Non-Repudiation
- Key Encipherment
- Data Encipherment
- Key Agreement
- Certificate Signing
- CRL Signing
- Encipher Only
If the provider is OpenSSL, the value is a comma-separated list of X.509 certificate extension names.
This property is read-only.
Data Type
String
ssl_accept_server_cert_usage_flags property (HadoopDFS Struct)
The flags that show intended use for the certificate.
Syntax
fn ssl_accept_server_cert_usage_flags(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
The flags that show intended use for the certificate. The value of ssl_accept_server_cert_usage_flags is a combination of the following flags:
| 0x80 | Digital Signature |
| 0x40 | Non-Repudiation |
| 0x20 | Key Encipherment |
| 0x10 | Data Encipherment |
| 0x08 | Key Agreement |
| 0x04 | Certificate Signing |
| 0x02 | CRL Signing |
| 0x01 | Encipher Only |
Please see the ssl_accept_server_cert_usage property for a text representation of ssl_accept_server_cert_usage_flags.
This functionality currently is not available when the provider is OpenSSL.
This property is read-only.
Data Type
i32
ssl_accept_server_cert_version property (HadoopDFS Struct)
The certificate's version number.
Syntax
fn ssl_accept_server_cert_version(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The certificate's version number. The possible values are the strings "V1", "V2", and "V3".
This property is read-only.
Data Type
String
ssl_accept_server_cert_subject property (HadoopDFS Struct)
The subject of the certificate used for client authentication.
Syntax
fn ssl_accept_server_cert_subject(&self ) -> Result<String, CloudFilesError>
fn set_ssl_accept_server_cert_subject(&self, value : &str) -> Option<CloudFilesError>
fn set_ssl_accept_server_cert_subject_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
The subject of the certificate used for client authentication.
This property must be set after all other certificate properties are set. When this property is set, a search is performed in the current certificate store to locate a certificate with a matching subject.
If a matching certificate is found, the property is set to the full subject of the matching certificate.
If an exact match is not found, the store is searched for subjects containing the value of the property.
If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks a random certificate in the certificate store.
The certificate subject is a comma-separated list of distinguished name fields and values. For instance, "CN=www.server.com, OU=test, C=US, E=example@email.com". Common fields and their meanings are as follows:
| Field | Meaning |
| CN | Common Name. This is commonly a hostname like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma, it must be quoted.
Data Type
String
ssl_accept_server_cert_encoded property (HadoopDFS Struct)
The certificate (PEM/Base64 encoded).
Syntax
fn ssl_accept_server_cert_encoded(&self ) -> Result<Vec<u8>, CloudFilesError>
fn set_ssl_accept_server_cert_encoded(&self, value : Vec<u8>) -> Option<CloudFilesError>
fn set_ssl_accept_server_cert_encoded_ref(&self, value : &[u8]) -> Option<CloudFilesError>
Default Value
""
Remarks
The certificate (PEM/Base64 encoded). This property is used to assign a specific certificate. The ssl_accept_server_cert_store and ssl_accept_server_cert_subject properties also may be used to specify a certificate.
When ssl_accept_server_cert_encoded is set, a search is initiated in the current ssl_accept_server_cert_store for the private key of the certificate. If the key is found, ssl_accept_server_cert_subject is updated to reflect the full subject of the selected certificate; otherwise, ssl_accept_server_cert_subject is set to an empty string.
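As a hedged sketch, a specific (for example, self-signed) server certificate could be assigned from a PEM file as shown below; the file name is hypothetical and hdfs is assumed to be an initialized HadoopDFS instance.
// Assign a known server certificate so it will be accepted during the TLS handshake.
fn trust_server_cert(hdfs: &cloudfiles::HadoopDFS) -> Option<cloudfiles::CloudFilesError> {
    // Read the PEM/Base64-encoded certificate; panics on I/O failure for brevity.
    let pem = std::fs::read("server_cert.pem").expect("read certificate file");
    hdfs.set_ssl_accept_server_cert_encoded(pem)
}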
Data Type
Vec
ssl_cert_effective_date property (HadoopDFS Struct)
The date on which this certificate becomes valid.
Syntax
fn ssl_cert_effective_date(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The date on which this certificate becomes valid. Before this date, it is not valid. The date is localized to the system's time zone. The following example illustrates the format of an encoded date:
23-Jan-2000 15:00:00.
This property is read-only.
Data Type
String
ssl_cert_expiration_date property (HadoopDFS Struct)
The date on which the certificate expires.
Syntax
fn ssl_cert_expiration_date(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The date on which the certificate expires. After this date, the certificate will no longer be valid. The date is localized to the system's time zone. The following example illustrates the format of an encoded date:
23-Jan-2001 15:00:00.
This property is read-only.
Data Type
String
ssl_cert_extended_key_usage property (HadoopDFS Struct)
A comma-delimited list of extended key usage identifiers.
Syntax
fn ssl_cert_extended_key_usage(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
A comma-delimited list of extended key usage identifiers. These are the same as ASN.1 object identifiers (OIDs).
This property is read-only.
Data Type
String
ssl_cert_fingerprint property (HadoopDFS Struct)
The hex-encoded, 16-byte MD5 fingerprint of the certificate.
Syntax
fn ssl_cert_fingerprint(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 16-byte MD5 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: bc:2a:72:af:fe:58:17:43:7a:5f:ba:5a:7c:90:f7:02
This property is read-only.
Data Type
String
ssl_cert_fingerprint_sha1 property (HadoopDFS Struct)
The hex-encoded, 20-byte SHA-1 fingerprint of the certificate.
Syntax
fn ssl_cert_fingerprint_sha1(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 20-byte SHA-1 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: 30:7b:fa:38:65:83:ff:da:b4:4e:07:3f:17:b8:a4:ed:80:be:ff:84
This property is read-only.
Data Type
String
ssl_cert_fingerprint_sha256 property (HadoopDFS Struct)
The hex-encoded, 32-byte SHA-256 fingerprint of the certificate.
Syntax
fn ssl_cert_fingerprint_sha256(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 32-byte SHA-256 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: 6a:80:5c:33:a9:43:ea:b0:96:12:8a:64:96:30:ef:4a:8a:96:86:ce:f4:c7:be:10:24:8e:2b:60:9e:f3:59:53
This property is read-only.
Data Type
String
ssl_cert_issuer property (HadoopDFS Struct)
The issuer of the certificate.
Syntax
fn ssl_cert_issuer(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The issuer of the certificate. This property contains a string representation of the name of the issuing authority for the certificate.
This property is read-only.
Data Type
String
ssl_cert_private_key property (HadoopDFS Struct)
The private key of the certificate (if available).
Syntax
fn ssl_cert_private_key(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The private key of the certificate (if available). The key is provided as PEM/Base64-encoded data.
NOTE: The ssl_cert_private_key may be available but not exportable. In this case, ssl_cert_private_key returns an empty string.
This property is read-only.
Data Type
String
ssl_cert_private_key_available property (HadoopDFS Struct)
Whether a PrivateKey is available for the selected certificate.
Syntax
fn ssl_cert_private_key_available(&self ) -> Result<bool, CloudFilesError>
Default Value
false
Remarks
Whether a ssl_cert_private_key is available for the selected certificate. If ssl_cert_private_key_available is True, the certificate may be used for authentication purposes (e.g., server authentication).
This property is read-only.
Data Type
bool
ssl_cert_private_key_container property (HadoopDFS Struct)
The name of the PrivateKey container for the certificate (if available).
Syntax
fn ssl_cert_private_key_container(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The name of the ssl_cert_private_key container for the certificate (if available). This functionality is available only on Windows platforms.
This property is read-only.
Data Type
String
ssl_cert_public_key property (HadoopDFS Struct)
The public key of the certificate.
Syntax
fn ssl_cert_public_key(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The public key of the certificate. The key is provided as PEM/Base64-encoded data.
This property is read-only.
Data Type
String
ssl_cert_public_key_algorithm property (HadoopDFS Struct)
The textual description of the certificate's public key algorithm.
Syntax
fn ssl_cert_public_key_algorithm(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The textual description of the certificate's public key algorithm. The property contains either the name of the algorithm (e.g., "RSA" or "RSA_DH") or an object identifier (OID) string representing the algorithm.
This property is read-only.
Data Type
String
ssl_cert_public_key_length property (HadoopDFS Struct)
The length of the certificate's public key (in bits).
Syntax
fn ssl_cert_public_key_length(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
The length of the certificate's public key (in bits). Common values are 512, 1024, and 2048.
This property is read-only.
Data Type
i32
ssl_cert_serial_number property (HadoopDFS Struct)
The serial number of the certificate encoded as a string.
Syntax
fn ssl_cert_serial_number(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The serial number of the certificate encoded as a string. The number is encoded as a series of hexadecimal digits, with each pair representing a byte of the serial number.
This property is read-only.
Data Type
String
ssl_cert_signature_algorithm property (HadoopDFS Struct)
The text description of the certificate's signature algorithm.
Syntax
fn ssl_cert_signature_algorithm(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The text description of the certificate's signature algorithm. The property contains either the name of the algorithm (e.g., "RSA" or "RSA_MD5RSA") or an object identifier (OID) string representing the algorithm.
This property is read-only.
Data Type
String
ssl_cert_store property (HadoopDFS Struct)
The name of the certificate store for the client certificate.
Syntax
fn ssl_cert_store(&self ) -> Result<Vec<u8>, CloudFilesError>
fn set_ssl_cert_store(&self, value : Vec<u8>) -> Option<CloudFilesError>
fn set_ssl_cert_store_ref(&self, value : &[u8]) -> Option<CloudFilesError>
Default Value
"MY"
Remarks
The name of the certificate store for the client certificate.
The ssl_cert_store_type property denotes the type of the certificate store specified by ssl_cert_store. If the store is password-protected, specify the password in ssl_cert_store_password.
ssl_cert_store is used in conjunction with the ssl_cert_subject property to specify client certificates. If ssl_cert_store has a value, and ssl_cert_subject or ssl_cert_encoded is set, a search for a certificate is initiated. Please see the ssl_cert_subject property for details.
Designations of certificate stores are platform dependent.
The following designations are the most common User and Machine certificate stores in Windows:
| MY | A certificate store holding personal certificates with their associated private keys. |
| CA | Certifying authority certificates. |
| ROOT | Root certificates. |
When the certificate store type is cstPFXFile, this property must be set to the name of the file. When the type is cstPFXBlob, the property must be set to the binary contents of a PFX file (i.e., PKCS#12 certificate store).
Data Type
Vec
ssl_cert_store_password property (HadoopDFS Struct)
If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store.
Syntax
fn ssl_cert_store_password(&self ) -> Result<String, CloudFilesError>
fn set_ssl_cert_store_password(&self, value : &str) -> Option<CloudFilesError>
fn set_ssl_cert_store_password_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store.
Data Type
String
ssl_cert_store_type property (HadoopDFS Struct)
The type of certificate store for this certificate.
Syntax
fn ssl_cert_store_type(&self ) -> Result<i32, CloudFilesError>
fn set_ssl_cert_store_type(&self, value : i32) -> Option<CloudFilesError>
Possible Values
0 // User
1 // Machine
2 // PFXFile
3 // PFXBlob
4 // JKSFile
5 // JKSBlob
6 // PEMKeyFile
7 // PEMKeyBlob
8 // PublicKeyFile
9 // PublicKeyBlob
10 // SSHPublicKeyBlob
11 // P7BFile
12 // P7BBlob
13 // SSHPublicKeyFile
14 // PPKFile
15 // PPKBlob
16 // XMLFile
17 // XMLBlob
18 // JWKFile
19 // JWKBlob
20 // SecurityKey
21 // BCFKSFile
22 // BCFKSBlob
23 // PKCS11
99 // Auto
Default Value
0
Remarks
The type of certificate store for this certificate.
The struct supports both public and private keys in a variety of formats. When the cstAuto value is used, the struct will automatically determine the type. This property can take one of the following values:
| 0 (cstUser - default) | For Windows, this specifies that the certificate store is a certificate store owned by the current user.
NOTE: This store type is not available in Java. |
| 1 (cstMachine) | For Windows, this specifies that the certificate store is a machine store.
NOTE: This store type is not available in Java. |
| 2 (cstPFXFile) | The certificate store is the name of a PFX (PKCS#12) file containing certificates. |
| 3 (cstPFXBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in PFX (PKCS#12) format. |
| 4 (cstJKSFile) | The certificate store is the name of a Java Key Store (JKS) file containing certificates.
NOTE: This store type is only available in Java. |
| 5 (cstJKSBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in Java Key Store (JKS) format.
NOTE: This store type is only available in Java. |
| 6 (cstPEMKeyFile) | The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate. |
| 7 (cstPEMKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains a private key and an optional certificate. |
| 8 (cstPublicKeyFile) | The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate. |
| 9 (cstPublicKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains a PEM- or DER-encoded public key certificate. |
| 10 (cstSSHPublicKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains an SSH-style public key. |
| 11 (cstP7BFile) | The certificate store is the name of a PKCS#7 file containing certificates. |
| 12 (cstP7BBlob) | The certificate store is a string (binary) representing a certificate store in PKCS#7 format. |
| 13 (cstSSHPublicKeyFile) | The certificate store is the name of a file that contains an SSH-style public key. |
| 14 (cstPPKFile) | The certificate store is the name of a file that contains a PPK (PuTTY Private Key). |
| 15 (cstPPKBlob) | The certificate store is a string (binary) that contains a PPK (PuTTY Private Key). |
| 16 (cstXMLFile) | The certificate store is the name of a file that contains a certificate in XML format. |
| 17 (cstXMLBlob) | The certificate store is a string that contains a certificate in XML format. |
| 18 (cstJWKFile) | The certificate store is the name of a file that contains a JWK (JSON Web Key). |
| 19 (cstJWKBlob) | The certificate store is a string that contains a JWK (JSON Web Key). |
| 21 (cstBCFKSFile) | The certificate store is the name of a file that contains a BCFKS (Bouncy Castle FIPS Key Store).
NOTE: This store type is only available in Java and .NET. |
| 22 (cstBCFKSBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in BCFKS (Bouncy Castle FIPS Key Store) format.
NOTE: This store type is only available in Java and .NET. |
| 23 (cstPKCS11) | The certificate is present on a physical security key accessible via a PKCS#11 interface.
To use a security key, the necessary data must first be collected using the CertMgr struct. The list_store_certificates method may be called after setting cert_store_type to cstPKCS11, cert_store_password to the PIN, and cert_store to the full path of the PKCS#11 DLL. The certificate information returned in the on_cert_list event's CertEncoded parameter may be saved for later use. When using a certificate, pass the previously saved security key information as the ssl_cert_store and set ssl_cert_store_password to the PIN. |
| 99 (cstAuto) | The store type is automatically detected from the input data. This setting may be used with both public and private keys and can detect any of the supported formats automatically. |
Data Type
i32
ssl_cert_subject_alt_names property (HadoopDFS Struct)
Comma-separated lists of alternative subject names for the certificate.
Syntax
fn ssl_cert_subject_alt_names(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
Comma-separated lists of alternative subject names for the certificate.
This property is read-only.
Data Type
String
ssl_cert_thumbprint_md5 property (HadoopDFS Struct)
The MD5 hash of the certificate.
Syntax
fn ssl_cert_thumbprint_md5(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The MD5 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_cert_thumbprint_sha1 property (HadoopDFS Struct)
The SHA-1 hash of the certificate.
Syntax
fn ssl_cert_thumbprint_sha1(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The SHA-1 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_cert_thumbprint_sha256 property (HadoopDFS Struct)
The SHA-256 hash of the certificate.
Syntax
fn ssl_cert_thumbprint_sha256(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The SHA-256 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_cert_usage property (HadoopDFS Struct)
The text description of UsageFlags.
Syntax
fn ssl_cert_usage(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The text description of ssl_cert_usage_flags.
This value will be one or more of the following strings and will be separated by commas:
- Digital Signature
- Non-Repudiation
- Key Encipherment
- Data Encipherment
- Key Agreement
- Certificate Signing
- CRL Signing
- Encipher Only
If the provider is OpenSSL, the value is a comma-separated list of X.509 certificate extension names.
This property is read-only.
Data Type
String
ssl_cert_usage_flags property (HadoopDFS Struct)
The flags that show intended use for the certificate.
Syntax
fn ssl_cert_usage_flags(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
The flags that show intended use for the certificate. The value of ssl_cert_usage_flags is a combination of the following flags:
| 0x80 | Digital Signature |
| 0x40 | Non-Repudiation |
| 0x20 | Key Encipherment |
| 0x10 | Data Encipherment |
| 0x08 | Key Agreement |
| 0x04 | Certificate Signing |
| 0x02 | CRL Signing |
| 0x01 | Encipher Only |
Please see the ssl_cert_usage property for a text representation of ssl_cert_usage_flags.
This functionality currently is not available when the provider is OpenSSL.
This property is read-only.
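For illustration, individual usage bits can be tested with the flag values from the table above (a sketch assuming hdfs is an initialized HadoopDFS instance with a certificate selected):
// True if the certificate is flagged for Digital Signature or Certificate Signing.
fn can_sign(hdfs: &cloudfiles::HadoopDFS) -> Result<bool, cloudfiles::CloudFilesError> {
    let flags = hdfs.ssl_cert_usage_flags()?;
    Ok(flags & 0x80 != 0 || flags & 0x04 != 0)   // 0x80 = Digital Signature, 0x04 = Certificate Signing
}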
Data Type
i32
ssl_cert_version property (HadoopDFS Struct)
The certificate's version number.
Syntax
fn ssl_cert_version(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The certificate's version number. The possible values are the strings "V1", "V2", and "V3".
This property is read-only.
Data Type
String
ssl_cert_subject property (HadoopDFS Struct)
The subject of the certificate used for client authentication.
Syntax
fn ssl_cert_subject(&self ) -> Result<String, CloudFilesError>
fn set_ssl_cert_subject(&self, value : &str) -> Option<CloudFilesError>
fn set_ssl_cert_subject_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
The subject of the certificate used for client authentication.
This property must be set after all other certificate properties are set. When this property is set, a search is performed in the current certificate store to locate a certificate with a matching subject.
If a matching certificate is found, the property is set to the full subject of the matching certificate.
If an exact match is not found, the store is searched for subjects containing the value of the property.
If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks a random certificate in the certificate store.
The certificate subject is a comma-separated list of distinguished name fields and values. For instance, "CN=www.server.com, OU=test, C=US, E=example@email.com". Common fields and their meanings are as follows:
| Field | Meaning |
| CN | Common Name. This is commonly a hostname like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma, it must be quoted.
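As a hedged sketch, a client certificate might be selected from a PFX file as shown below; store type 2 (cstPFXFile) and the "*" subject value come from the tables above, the file name and password are hypothetical, and hdfs is assumed to be an initialized HadoopDFS instance.
// Select a client certificate from a password-protected PFX file.
fn load_client_cert(hdfs: &cloudfiles::HadoopDFS) -> Option<cloudfiles::CloudFilesError> {
    if let Some(err) = hdfs.set_ssl_cert_store_type(2) {              // 2 = cstPFXFile
        return Some(err);
    }
    if let Some(err) = hdfs.set_ssl_cert_store_ref(b"client.pfx") {   // name of the PFX file
        return Some(err);
    }
    if let Some(err) = hdfs.set_ssl_cert_store_password("pfx-password") {
        return Some(err);
    }
    hdfs.set_ssl_cert_subject("*")                                    // pick a certificate from the store
}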
Data Type
String
ssl_cert_encoded property (HadoopDFS Struct)
The certificate (PEM/Base64 encoded).
Syntax
fn ssl_cert_encoded(&self ) -> Result<Vec<u8>, CloudFilesError>
fn set_ssl_cert_encoded(&self, value : Vec<u8>) -> Option<CloudFilesError>
fn set_ssl_cert_encoded_ref(&self, value : &[u8]) -> Option<CloudFilesError>
Default Value
""
Remarks
The certificate (PEM/Base64 encoded). This property is used to assign a specific certificate. The ssl_cert_store and ssl_cert_subject properties also may be used to specify a certificate.
When ssl_cert_encoded is set, a search is initiated in the current ssl_cert_store for the private key of the certificate. If the key is found, ssl_cert_subject is updated to reflect the full subject of the selected certificate; otherwise, ssl_cert_subject is set to an empty string.
Data Type
Vec
ssl_provider property (HadoopDFS Struct)
The Secure Sockets Layer/Transport Layer Security (SSL/TLS) implementation to use.
Syntax
fn ssl_provider(&self ) -> Result<i32, CloudFilesError>
fn set_ssl_provider(&self, value : i32) -> Option<CloudFilesError>
Possible Values
0 // Automatic
1 // Platform
2 // Internal
Default Value
0
Remarks
This property specifies the SSL/TLS implementation to use. In most cases the default value of 0 (Automatic) is recommended and should not be changed. When set to 0 (Automatic), the struct will select whether to use the platform implementation or the internal implementation depending on the operating system as well as the TLS version being used.
Possible values are as follows:
| 0 (sslpAutomatic - default) | Automatically selects the appropriate implementation. |
| 1 (sslpPlatform) | Uses the platform/system implementation. |
| 2 (sslpInternal) | Uses the internal implementation. |
When Automatic is selected, the struct will use the platform implementation on Windows and the internal implementation on Linux/macOS. When TLS 1.3 is enabled via SSLEnabledProtocols, the internal implementation is used on all platforms.
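For illustration, forcing the internal implementation is a single property assignment (a sketch assuming hdfs is an initialized HadoopDFS instance):
// Force the internal SSL/TLS implementation (2 = sslpInternal).
fn use_internal_tls(hdfs: &cloudfiles::HadoopDFS) -> Option<cloudfiles::CloudFilesError> {
    hdfs.set_ssl_provider(2)
}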
Data Type
i32
ssl_server_cert_effective_date property (HadoopDFS Struct)
The date on which this certificate becomes valid.
Syntax
fn ssl_server_cert_effective_date(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The date on which this certificate becomes valid. Before this date, it is not valid. The date is localized to the system's time zone. The following example illustrates the format of an encoded date:
23-Jan-2000 15:00:00.
This property is read-only.
Data Type
String
ssl_server_cert_expiration_date property (HadoopDFS Struct)
The date on which the certificate expires.
Syntax
fn ssl_server_cert_expiration_date(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The date on which the certificate expires. After this date, the certificate will no longer be valid. The date is localized to the system's time zone. The following example illustrates the format of an encoded date:
23-Jan-2001 15:00:00.
This property is read-only.
Data Type
String
ssl_server_cert_extended_key_usage property (HadoopDFS Struct)
A comma-delimited list of extended key usage identifiers.
Syntax
fn ssl_server_cert_extended_key_usage(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
A comma-delimited list of extended key usage identifiers. These are the same as ASN.1 object identifiers (OIDs).
This property is read-only.
Data Type
String
ssl_server_cert_fingerprint property (HadoopDFS Struct)
The hex-encoded, 16-byte MD5 fingerprint of the certificate.
Syntax
fn ssl_server_cert_fingerprint(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 16-byte MD5 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: bc:2a:72:af:fe:58:17:43:7a:5f:ba:5a:7c:90:f7:02
This property is read-only.
Data Type
String
ssl_server_cert_fingerprint_sha1 property (HadoopDFS Struct)
The hex-encoded, 20-byte SHA-1 fingerprint of the certificate.
Syntax
fn ssl_server_cert_fingerprint_sha1(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 20-byte SHA-1 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: 30:7b:fa:38:65:83:ff:da:b4:4e:07:3f:17:b8:a4:ed:80:be:ff:84
This property is read-only.
Data Type
String
ssl_server_cert_fingerprint_sha256 property (HadoopDFS Struct)
The hex-encoded, 32-byte SHA-256 fingerprint of the certificate.
Syntax
fn ssl_server_cert_fingerprint_sha256(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The hex-encoded, 32-byte SHA-256 fingerprint of the certificate. This property is primarily used for keys which do not have a corresponding X.509 public certificate, such as PEM keys that only contain a private key. It is commonly used for SSH keys.
The following example illustrates the format: 6a:80:5c:33:a9:43:ea:b0:96:12:8a:64:96:30:ef:4a:8a:96:86:ce:f4:c7:be:10:24:8e:2b:60:9e:f3:59:53
This property is read-only.
Data Type
String
ssl_server_cert_issuer property (HadoopDFS Struct)
The issuer of the certificate.
Syntax
fn ssl_server_cert_issuer(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The issuer of the certificate. This property contains a string representation of the name of the issuing authority for the certificate.
This property is read-only.
Data Type
String
ssl_server_cert_private_key property (HadoopDFS Struct)
The private key of the certificate (if available).
Syntax
fn ssl_server_cert_private_key(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The private key of the certificate (if available). The key is provided as PEM/Base64-encoded data.
NOTE: The ssl_server_cert_private_key may be available but not exportable. In this case, ssl_server_cert_private_key returns an empty string.
This property is read-only.
Data Type
String
ssl_server_cert_private_key_available property (HadoopDFS Struct)
Whether a PrivateKey is available for the selected certificate.
Syntax
fn ssl_server_cert_private_key_available(&self ) -> Result<bool, CloudFilesError>
Default Value
false
Remarks
Whether a ssl_server_cert_private_key is available for the selected certificate. If ssl_server_cert_private_key_available is True, the certificate may be used for authentication purposes (e.g., server authentication).
This property is read-only.
Data Type
bool
ssl_server_cert_private_key_container property (HadoopDFS Struct)
The name of the PrivateKey container for the certificate (if available).
Syntax
fn ssl_server_cert_private_key_container(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The name of the ssl_server_cert_private_key container for the certificate (if available). This functionality is available only on Windows platforms.
This property is read-only.
Data Type
String
ssl_server_cert_public_key property (HadoopDFS Struct)
The public key of the certificate.
Syntax
fn ssl_server_cert_public_key(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The public key of the certificate. The key is provided as PEM/Base64-encoded data.
This property is read-only.
Data Type
String
ssl_server_cert_public_key_algorithm property (HadoopDFS Struct)
The textual description of the certificate's public key algorithm.
Syntax
fn ssl_server_cert_public_key_algorithm(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The textual description of the certificate's public key algorithm. The property contains either the name of the algorithm (e.g., "RSA" or "RSA_DH") or an object identifier (OID) string representing the algorithm.
This property is read-only.
Data Type
String
ssl_server_cert_public_key_length property (HadoopDFS Struct)
The length of the certificate's public key (in bits).
Syntax
fn ssl_server_cert_public_key_length(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
The length of the certificate's public key (in bits). Common values are 512, 1024, and 2048.
This property is read-only.
Data Type
i32
ssl_server_cert_serial_number property (HadoopDFS Struct)
The serial number of the certificate encoded as a string.
Syntax
fn ssl_server_cert_serial_number(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The serial number of the certificate encoded as a string. The number is encoded as a series of hexadecimal digits, with each pair representing a byte of the serial number.
This property is read-only.
Data Type
String
ssl_server_cert_signature_algorithm property (HadoopDFS Struct)
The text description of the certificate's signature algorithm.
Syntax
fn ssl_server_cert_signature_algorithm(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The text description of the certificate's signature algorithm. The property contains either the name of the algorithm (e.g., "RSA" or "RSA_MD5RSA") or an object identifier (OID) string representing the algorithm.
This property is read-only.
Data Type
String
ssl_server_cert_store property (HadoopDFS Struct)
The name of the certificate store for the client certificate.
Syntax
fn ssl_server_cert_store(&self ) -> Result<Vec<u8>, CloudFilesError>
Default Value
"MY"
Remarks
The name of the certificate store for the client certificate.
The ssl_server_cert_store_type property denotes the type of the certificate store specified by ssl_server_cert_store. If the store is password-protected, specify the password in ssl_server_cert_store_password.
ssl_server_cert_store is used in conjunction with the ssl_server_cert_subject property to specify client certificates. If ssl_server_cert_store has a value, and ssl_server_cert_subject or ssl_server_cert_encoded is set, a search for a certificate is initiated. Please see the ssl_server_cert_subject property for details.
Designations of certificate stores are platform dependent.
The following designations are the most common User and Machine certificate stores in Windows:
| MY | A certificate store holding personal certificates with their associated private keys. |
| CA | Certifying authority certificates. |
| ROOT | Root certificates. |
When the certificate store type is cstPFXFile, this property must be set to the name of the file. When the type is cstPFXBlob, the property must be set to the binary contents of a PFX file (i.e., PKCS#12 certificate store).
This property is read-only.
Data Type
Vec<u8>
ssl_server_cert_store_password property (HadoopDFS Struct)
If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store.
Syntax
fn ssl_server_cert_store_password(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
If the type of certificate store requires a password, this property is used to specify the password needed to open the certificate store.
This property is read-only.
Data Type
String
ssl_server_cert_store_type property (HadoopDFS Struct)
The type of certificate store for this certificate.
Syntax
fn ssl_server_cert_store_type(&self ) -> Result<i32, CloudFilesError>
Possible Values
0 // User
1 // Machine
2 // PFXFile
3 // PFXBlob
4 // JKSFile
5 // JKSBlob
6 // PEMKeyFile
7 // PEMKeyBlob
8 // PublicKeyFile
9 // PublicKeyBlob
10 // SSHPublicKeyBlob
11 // P7BFile
12 // P7BBlob
13 // SSHPublicKeyFile
14 // PPKFile
15 // PPKBlob
16 // XMLFile
17 // XMLBlob
18 // JWKFile
19 // JWKBlob
20 // SecurityKey
21 // BCFKSFile
22 // BCFKSBlob
23 // PKCS11
99 // Auto
Default Value
0
Remarks
The type of certificate store for this certificate.
The struct supports both public and private keys in a variety of formats. When the cstAuto value is used, the struct will automatically determine the type. This property can take one of the following values:
| 0 (cstUser - default) | For Windows, this specifies that the certificate store is a certificate store owned by the current user.
NOTE: This store type is not available in Java. |
| 1 (cstMachine) | For Windows, this specifies that the certificate store is a machine store.
NOTE: This store type is not available in Java. |
| 2 (cstPFXFile) | The certificate store is the name of a PFX (PKCS#12) file containing certificates. |
| 3 (cstPFXBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in PFX (PKCS#12) format. |
| 4 (cstJKSFile) | The certificate store is the name of a Java Key Store (JKS) file containing certificates.
NOTE: This store type is only available in Java. |
| 5 (cstJKSBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in Java Key Store (JKS) format.
NOTE: This store type is only available in Java. |
| 6 (cstPEMKeyFile) | The certificate store is the name of a PEM-encoded file that contains a private key and an optional certificate. |
| 7 (cstPEMKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains a private key and an optional certificate. |
| 8 (cstPublicKeyFile) | The certificate store is the name of a file that contains a PEM- or DER-encoded public key certificate. |
| 9 (cstPublicKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains a PEM- or DER-encoded public key certificate. |
| 10 (cstSSHPublicKeyBlob) | The certificate store is a string (binary or Base64-encoded) that contains an SSH-style public key. |
| 11 (cstP7BFile) | The certificate store is the name of a PKCS#7 file containing certificates. |
| 12 (cstP7BBlob) | The certificate store is a string (binary) representing a certificate store in PKCS#7 format. |
| 13 (cstSSHPublicKeyFile) | The certificate store is the name of a file that contains an SSH-style public key. |
| 14 (cstPPKFile) | The certificate store is the name of a file that contains a PPK (PuTTY Private Key). |
| 15 (cstPPKBlob) | The certificate store is a string (binary) that contains a PPK (PuTTY Private Key). |
| 16 (cstXMLFile) | The certificate store is the name of a file that contains a certificate in XML format. |
| 17 (cstXMLBlob) | The certificate store is a string that contains a certificate in XML format. |
| 18 (cstJWKFile) | The certificate store is the name of a file that contains a JWK (JSON Web Key). |
| 19 (cstJWKBlob) | The certificate store is a string that contains a JWK (JSON Web Key). |
| 21 (cstBCFKSFile) | The certificate store is the name of a file that contains a BCFKS (Bouncy Castle FIPS Key Store).
NOTE: This store type is only available in Java and .NET. |
| 22 (cstBCFKSBlob) | The certificate store is a string (binary or Base64-encoded) representing a certificate store in BCFKS (Bouncy Castle FIPS Key Store) format.
NOTE: This store type is only available in Java and .NET. |
| 23 (cstPKCS11) | The certificate is present on a physical security key accessible via a PKCS#11 interface.
To use a security key, the necessary data must first be collected using the CertMgr struct. The list_store_certificates method may be called after setting cert_store_type to cstPKCS11, cert_store_password to the PIN, and cert_store to the full path of the PKCS#11 DLL. The certificate information returned in the on_cert_list event's CertEncoded parameter may be saved for later use. When using a certificate, pass the previously saved security key information as the ssl_server_cert_store and set ssl_server_cert_store_password to the PIN. Code Example. SSH Authentication with Security Key:
|
| 99 (cstAuto) | The store type is automatically detected from the input data. This setting may be used with both public and private keys and can detect any of the supported formats automatically. |
This property is read-only.
Data Type
i32
ssl_server_cert_subject_alt_names property (HadoopDFS Struct)
A comma-separated list of alternative subject names for the certificate.
Syntax
fn ssl_server_cert_subject_alt_names(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
A comma-separated list of alternative subject names for the certificate.
This property is read-only.
Data Type
String
ssl_server_cert_thumbprint_md5 property (HadoopDFS Struct)
The MD5 hash of the certificate.
Syntax
fn ssl_server_cert_thumbprint_md5(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The MD5 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_server_cert_thumbprint_sha1 property (HadoopDFS Struct)
The SHA-1 hash of the certificate.
Syntax
fn ssl_server_cert_thumbprint_sha1(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The SHA-1 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_server_cert_thumbprint_sha256 property (HadoopDFS Struct)
The SHA-256 hash of the certificate.
Syntax
fn ssl_server_cert_thumbprint_sha256(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The SHA-256 hash of the certificate. It is primarily used for X.509 certificates. If the hash does not already exist, it is automatically computed.
This property is read-only.
Data Type
String
ssl_server_cert_usage property (HadoopDFS Struct)
The text description of UsageFlags.
Syntax
fn ssl_server_cert_usage(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The text description of ssl_server_cert_usage_flags.
This value will be one or more of the following strings and will be separated by commas:
- Digital Signature
- Non-Repudiation
- Key Encipherment
- Data Encipherment
- Key Agreement
- Certificate Signing
- CRL Signing
- Encipher Only
If the provider is OpenSSL, the value is a comma-separated list of X.509 certificate extension names.
This property is read-only.
Data Type
String
ssl_server_cert_usage_flags property (HadoopDFS Struct)
The flags that show intended use for the certificate.
Syntax
fn ssl_server_cert_usage_flags(&self ) -> Result<i32, CloudFilesError>
Default Value
0
Remarks
The flags that show intended use for the certificate. The value of ssl_server_cert_usage_flags is a combination of the following flags:
| 0x80 | Digital Signature |
| 0x40 | Non-Repudiation |
| 0x20 | Key Encipherment |
| 0x10 | Data Encipherment |
| 0x08 | Key Agreement |
| 0x04 | Certificate Signing |
| 0x02 | CRL Signing |
| 0x01 | Encipher Only |
Please see the ssl_server_cert_usage property for a text representation of ssl_server_cert_usage_flags.
This functionality currently is not available when the provider is OpenSSL.
This property is read-only.
Data Type
i32
ssl_server_cert_version property (HadoopDFS Struct)
The certificate's version number.
Syntax
fn ssl_server_cert_version(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The certificate's version number. The possible values are the strings "V1", "V2", and "V3".
This property is read-only.
Data Type
String
ssl_server_cert_subject property (HadoopDFS Struct)
The subject of the certificate used for client authentication.
Syntax
fn ssl_server_cert_subject(&self ) -> Result<String, CloudFilesError>
Default Value
""
Remarks
The subject of the certificate used for client authentication.
This property must be set after all other certificate properties are set. When this property is set, a search is performed in the current certificate store to locate a certificate with a matching subject.
If a matching certificate is found, the property is set to the full subject of the matching certificate.
If an exact match is not found, the store is searched for subjects containing the value of the property.
If a match is still not found, the property is set to an empty string, and no certificate is selected.
The special value "*" picks a random certificate in the certificate store.
The certificate subject is a comma-separated list of distinguished name fields and values. For instance, "CN=www.server.com, OU=test, C=US, E=example@email.com". Common fields and their meanings are as follows:
| Field | Meaning |
| CN | Common Name. This is commonly a hostname like www.server.com. |
| O | Organization |
| OU | Organizational Unit |
| L | Locality |
| S | State |
| C | Country |
| E | Email Address |
If a field value contains a comma, it must be quoted.
This property is read-only.
Data Type
String
ssl_server_cert_encoded property (HadoopDFS Struct)
The certificate (PEM/Base64 encoded).
Syntax
fn ssl_server_cert_encoded(&self ) -> Result<Vec<u8>, CloudFilesError>
Default Value
""
Remarks
The certificate (PEM/Base64 encoded). This property is used to assign a specific certificate. The ssl_server_cert_store and ssl_server_cert_subject properties also may be used to specify a certificate.
When ssl_server_cert_encoded is set, a search is initiated in the current ssl_server_cert_store for the private key of the certificate. If the key is found, ssl_server_cert_subject is updated to reflect the full subject of the selected certificate; otherwise, ssl_server_cert_subject is set to an empty string.
This property is read-only.
Data Type
Vec<u8>
start_byte property (HadoopDFS Struct)
The byte offset from which to start downloading a file.
Syntax
fn start_byte(&self ) -> Result<i64, CloudFilesError>
fn set_start_byte(&self, value : i64) -> Option<CloudFilesError>
Default Value
0
Remarks
This property specifies an offset (in bytes) from which to start reading a file when download_file is called. It can be used in tandem with read_bytes to specify a specific range of the file to download.
Data Type
i64
timeout property (HadoopDFS Struct)
The timeout for the struct.
Syntax
fn timeout(&self ) -> Result<i32, CloudFilesError>
fn set_timeout(&self, value : i32) -> Option<CloudFilesError>
Default Value
60
Remarks
If the timeout property is set to 0, all operations will run uninterrupted until successful completion or an error condition is encountered.
If timeout is set to a positive value, the struct will wait for the operation to complete before returning control.
The struct will use do_events to enter an efficient wait loop during any potential waiting period, making sure that all system events are processed immediately as they arrive. This ensures that the host application does not freeze and remains responsive.
If timeout expires, and the operation is not yet complete, the struct fails with an error.
NOTE: By default, all timeouts are inactivity timeouts, that is, the timeout period is extended by timeout seconds when any amount of data is successfully sent or received.
The default value for the timeout property is 60 seconds.
Data Type
i32
url property (HadoopDFS Struct)
The URL of the Hadoop WebHDFS server.
Syntax
fn url(&self ) -> Result<String, CloudFilesError>
fn set_url(&self, value : &str) -> Option<CloudFilesError> fn set_url_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
This property specifies the URL of the Hadoop WebHDFS server to make requests against.
A full WebHDFS URL looks like http[s]://<HOST>:<PORT>/webhdfs/v1. Anytime this property is set to a non-empty string, the struct will automatically append /webhdfs/v1 to its value (if necessary).
The struct automatically detects whether to secure the connection using SSL/TLS based on whether the url begins with http (not secured) or https (secured).
Data Type
String
user property (HadoopDFS Struct)
The user name to use for authentication.
Syntax
fn user(&self ) -> Result<String, CloudFilesError>
fn set_user(&self, value : &str) -> Option<CloudFilesError> fn set_user_ref(&self, value : &String) -> Option<CloudFilesError>
Default Value
""
Remarks
This property specifies the user name to use for authentication.
Refer to auth_mechanism for more information.
Data Type
String
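For reference, a minimal Rust sketch of configuring the connection might look as follows. The host name is a placeholder, the CloudFilesError import path is assumed, and the example presumes the cluster's authentication needs are already satisfied; refer to auth_mechanism for the mechanism your cluster actually requires.
use cloudfiles::{CloudFilesError, HadoopDFS};
// Point the struct at the WebHDFS endpoint and choose a user name.
// /webhdfs/v1 is appended automatically if missing; an https URL selects SSL/TLS.
fn configure(hdfs: &HadoopDFS) -> Result<(), CloudFilesError> {
    if let Some(err) = hdfs.set_url("https://namenode.example.com:9870") {
        return Err(err);
    }
    if let Some(err) = hdfs.set_user("hadoop_user") {
        return Err(err);
    }
    println!("Effective URL: {}", hdfs.url()?); // ends with /webhdfs/v1
    Ok(())
}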
add_query_param method (HadoopDFS Struct)
Adds a query parameter to the QueryParams properties.
Syntax
fn add_query_param(&self, name : &str, value : &str) -> Result<(), CloudFilesError>
Remarks
This method is used to add a query parameter to the query_params properties. Name specifies the name of the parameter, and Value specifies the value of the parameter.
All specified Values will be URL encoded by the struct automatically. Consult the service documentation for details on the available parameters.
append_file method (HadoopDFS Struct)
Appends data to an existing file.
Syntax
fn append_file(&self, file_path : &str) -> Result<(), CloudFilesError>
Remarks
This method appends data to the existing file at FilePath.
If local_file is set, data will be read from the file at the specified path. If local_file is not set, the data in resource_data will be used.
calc_authorization method (HadoopDFS Struct)
Calculates the Authorization header based on provided credentials.
Syntax
fn calc_authorization(&self) -> Result<(), CloudFilesError>
Remarks
This method calculates the authorization value using the values provided in auth_scheme, user, and password.
In most cases this method does not need to be called. The struct will automatically calculate any required authorization values when a method such as get or post is called.
This method may be useful in cases where the authorization value needs to be calculated prior to sending a request.
config method (HadoopDFS Struct)
Sets or retrieves a configuration setting.
Syntax
fn config(&self, configuration_string : &str) -> Result<String, CloudFilesError>
Remarks
config is a generic method available in every struct. It is used to set and retrieve configuration settings for the struct.
These settings are similar in functionality to properties, but they are rarely used. In order to avoid "polluting" the property namespace of the struct, access to these internal properties is provided through the config method.
To set a configuration setting named PROPERTY, you must call Config("PROPERTY=VALUE"), where VALUE is the value of the setting expressed as a string. For boolean values, use the strings "True", "False", "0", "1", "Yes", or "No" (case does not matter).
To read (query) the value of a configuration setting, you must call Config("PROPERTY"). The value will be returned as a string.
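For example, the RecursiveDelete setting described under delete_resource could be changed and read back like this (a small sketch using only the config method shown above):
use cloudfiles::{CloudFilesError, HadoopDFS};
// Set a boolean configuration setting, then query its current value.
fn toggle_recursive_delete(hdfs: &HadoopDFS) -> Result<(), CloudFilesError> {
    hdfs.config("RecursiveDelete=False")?;       // set
    let value = hdfs.config("RecursiveDelete")?; // query
    println!("RecursiveDelete is now {}", value);
    Ok(())
}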
delete_resource method (HadoopDFS Struct)
Deletes a resource.
Syntax
fn delete_resource(&self, path : &str) -> Result<(), CloudFilesError>
Remarks
This method deletes the resource at Path.
The RecursiveDelete configuration setting can be used to control whether this method will recursively delete non-empty directories (it is enabled by default).
do_custom_op method (HadoopDFS Struct)
Executes an arbitrary WebHDFS operation.
Syntax
fn do_custom_op(&self, http_method : &str, request_path : &str, op : &str) -> Result<(), CloudFilesError>
Remarks
This method can be used to execute any WebHDFS operation the struct does not explicitly implement.
Valid values for HttpMethod are:
- GET (default if empty)
- POST
- PUT
- DELETE
RequestPath must be either a valid resource path or an empty string. Op must be a valid WebHDFS operation (refer to the Hadoop WebHDFS documentation for a full list of supported operations).
Usage
When this method is called, the struct will do the following:
- Build a request URL using url, RequestPath, Op, and the query parameters held by the query_params properties.
- Send the request using the given HttpMethod, the request URL built in the previous step, the headers held by other_headers, and the currently-configured authentication (if any; refer to auth_mechanism for more information). The request is always sent with an empty body.
- Store the response headers in the parsed_headers properties, and the response body in the specified local_file or resource_data (using the same logic as download_file).
If the response body is JSON data, the XPath, XText, and other X* configuration settings can then be used to navigate and extract information from it.
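As an illustration, the sketch below calls the standard WebHDFS GETHOMEDIRECTORY operation (documented by Hadoop, not wrapped by this struct) and reads the Path member of its JSON response using the XPath and XText configuration settings. The response layout is taken from the WebHDFS documentation, not this reference; any operation-specific query parameters would be supplied beforehand via add_query_param.
use cloudfiles::{CloudFilesError, HadoopDFS};
// Execute an unwrapped WebHDFS operation and pull a value out of the JSON response.
fn home_directory(hdfs: &HadoopDFS) -> Result<String, CloudFilesError> {
    hdfs.do_custom_op("GET", "", "GETHOMEDIRECTORY")?;
    hdfs.config("XPath=/json/Path")?; // JSON documents are rooted at /json
    hdfs.config("XText")              // e.g., "/user/hadoop_user"
}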
download_file method (HadoopDFS Struct)
Downloads a file.
Syntax
fn download_file(&self, file_path : &str) -> Result<(), CloudFilesError>
Remarks
This method downloads the file at FilePath.
If local_file is set, the file will be saved to the specified location; otherwise, the file data will be held by resource_data.
To download and decrypt an encrypted file, set encryption_algorithm and encryption_password before calling this method.
Download Notes
In the simplest use-case, downloading a file looks like this:
hdfs.LocalFile = "../MyFile.zip";
hdfs.DownloadFile(hdfs.Resources[0].Path);
Resuming Downloads
The struct also supports resuming failed downloads by using the start_byte property. If a download is interrupted, set start_byte to the appropriate offset before calling this method to resume the download.
string downloadFile = "../MyFile.zip";
hdfs.LocalFile = downloadFile;
hdfs.DownloadFile(hdfs.Resources[0].Path);
//The transfer is interrupted and DownloadFile() above fails. Later, resume the download:
//Get the size of the partially downloaded file
hdfs.StartByte = new FileInfo(downloadFile).Length;
hdfs.DownloadFile(hdfs.Resources[0].Path);
Resuming Encrypted File Downloads
Resuming encrypted file downloads is only supported when local_file was set in the initial download attempt.
If local_file is set when beginning an encrypted download, the struct creates a temporary file in TempPath to hold the encrypted data until the download is complete. If the download is interrupted, DownloadTempFile will be populated with the path of the temporary file that holds the partial data.
To resume, DownloadTempFile must be populated, along with start_byte, to allow the remainder of the encrypted data to be downloaded. Once the encrypted data is downloaded it will be decrypted and written to local_file.
hdfs.LocalFile = "../MyFile.zip";
hdfs.EncryptionPassword = "password";
hdfs.DownloadFile(hdfs.Resources[0].Path);
//The transfer is interrupted and DownloadFile() above fails. Later, resume the download:
//Get the size of the partially downloaded temp file
hdfs.StartByte = new FileInfo(hdfs.Config("DownloadTempFile")).Length;
hdfs.DownloadFile(hdfs.Resources[0].Path);
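A rough Rust equivalent of the resume logic above might look like the following. set_local_file is assumed to be the setter for the local_file property, and the error handling is deliberately simplistic.
use std::fs;
use cloudfiles::{CloudFilesError, HadoopDFS};
// Download a file; if the transfer fails partway, resume it by seeding
// start_byte with the number of bytes already written locally.
fn download_with_resume(hdfs: &HadoopDFS, remote: &str, local: &str) -> Result<(), CloudFilesError> {
    if let Some(err) = hdfs.set_local_file(local) { // assumed setter name
        return Err(err);
    }
    if hdfs.download_file(remote).is_err() {
        let already = fs::metadata(local).map(|m| m.len()).unwrap_or(0);
        if let Some(err) = hdfs.set_start_byte(already as i64) {
            return Err(err);
        }
        hdfs.download_file(remote)?;
    }
    Ok(())
}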
get_dir_summary method (HadoopDFS Struct)
Gets a content summary for a directory.
Syntax
fn get_dir_summary(&self, dir_path : &str) -> Result<(), CloudFilesError>
Remarks
This method gets a content summary for the directory at DirPath, populating the dir_summary object property's properties.
DirPath may be empty to get a content summary for the root directory (/).
get_resource_info method (HadoopDFS Struct)
Gets information about a specific resource.
Syntax
fn get_resource_info(&self, path : &str) -> Result<(), CloudFilesError>
Remarks
This method gets information about the resource at Path. Path may be empty to get information about the root directory (/).
Calling this method will fire the on_resource_list event, and will populate the resources properties with a single item (clearing any previously-held items in the process).
interrupt method (HadoopDFS Struct)
This method interrupts the current method.
Syntax
fn interrupt(&self) -> Result<(), CloudFilesError>
Remarks
If there is no method in progress, interrupt simply returns, doing nothing.
join_file_blocks method (HadoopDFS Struct)
Joins multiple files' blocks together into one file.
Syntax
fn join_file_blocks(&self, target_file_path : &str, source_file_paths : &str) -> Result<(), CloudFilesError>
Remarks
This method joins the blocks from the files at one or more SourceFilePaths onto the end of the file at TargetFilePath. (If this operation is successful, the original source files will no longer be accessible.)
The order of the source files' paths in SourceFilePaths determines the order in which their blocks are joined onto the file at TargetFilePath.
Usage Constraints
The server imposes a number of constraints that must be satisfied in order for the join operation to complete successfully.
TargetFilePath and SourceFilePaths must both be non-empty.
SourceFilePaths must be specified as a comma-separated list of source file paths, with no duplicates or paths that match TargetFilePath.
Additionally, all files referred to by both TargetFilePath and SourceFilePaths must satisfy the following constraints:
- All files must already exist.
- All files must be located in the exact same directory (i.e., all files must be siblings).
- The block size of all source files must be less than or equal to the block size of the target file.
If any of the above constraints are not satisfied, the server will return an error.
The struct will take care of verifying that all parameters are non-empty and that all paths begin with a forward slash (/).
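For example (a sketch with placeholder paths; all three files must already exist in the same directory):
use cloudfiles::{CloudFilesError, HadoopDFS};
// Join the blocks of two sibling part files onto the end of a target file.
// The source paths are passed as a single comma-separated string.
fn concat_parts(hdfs: &HadoopDFS) -> Result<(), CloudFilesError> {
    hdfs.join_file_blocks(
        "/data/merged.log",
        "/data/part-0001.log,/data/part-0002.log",
    )
}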
list_resources method (HadoopDFS Struct)
Lists resources in a given directory.
Syntax
fn list_resources(&self, dir_path : &str) -> Result<(), CloudFilesError>
Remarks
This method lists resources within the directory at DirPath.
Calling this method will fire the on_resource_list event once for each resource, and will also populate the resources properties.
DirPath may be empty to list resources in the root directory (/).
// ResourceList event handler.
hdfs.OnResourceList += (s, e) => {
Console.WriteLine(e.Name);
};
hdfs.ListResources("/work_files/serious_business/cats");
for (int i = 0; i < hdfs.Resources.Count; i++) {
// Process resources here.
}
make_directory method (HadoopDFS Struct)
Makes a directory.
Syntax
fn make_directory(&self, new_dir_path : &str) -> Result<String, CloudFilesError>
Remarks
This method makes a new directory at NewDirPath and returns the full path of the new directory. Any non-existent parent directories will also be created.
If the CreatePermission configuration setting is non-empty, the directory will be created with the permission it specifies. Otherwise, the server's default (755) will be used.
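For example (placeholder path):
use cloudfiles::{CloudFilesError, HadoopDFS};
// Create a nested directory; any missing parents are created as well.
// The returned string is the full absolute path of the new directory.
fn ensure_archive_dir(hdfs: &HadoopDFS) -> Result<String, CloudFilesError> {
    hdfs.make_directory("/work_files/serious_business/cats/archive")
}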
move_resource method (HadoopDFS Struct)
Moves a resource.
Syntax
fn move_resource(&self, from_path : &str, to_path : &str) -> Result<(), CloudFilesError>
Remarks
This method moves the resource at FromPath to ToPath.
reset method (HadoopDFS Struct)
Resets the struct to its initial state.
Syntax
fn reset(&self) -> Result<(), CloudFilesError>
Remarks
This method resets the struct to its initial state.
set_file_replication method (HadoopDFS Struct)
Sets the replication factor for a file.
Syntax
fn set_file_replication(&self, file_path : &str, replication : i32) -> Result<(), CloudFilesError>
Remarks
This method sets the replication factor for the file at FilePath to Replication.
Valid values for Replication are 1 through 32767 (inclusive); or 0 to reset the file's replication factor back to the server default.
set_owner method (HadoopDFS Struct)
Sets a resource's owner and/or group.
Syntax
fn set_owner(&self, path : &str, owner : &str, group : &str) -> Result<(), CloudFilesError>
Remarks
This method sets the owner and/or group of the resource at Path to the given values.
If non-empty, Owner must be a valid user name, and Group must be a valid group name. If either parameter is empty, the file's current value will remain unchanged. (If both parameters are empty, no request is sent.)
set_permission method (HadoopDFS Struct)
Assigns the given permission to a resource.
Syntax
fn set_permission(&self, path : &str, permission : &str) -> Result<(), CloudFilesError>
Remarks
This method assigns the given Permission to the resource at Path.
Permission must be formatted as an octal permission string. If Permission is an empty string, the resource's permission will be reset to the server default (755).
set_times method (HadoopDFS Struct)
Sets a resource's modification and/or access times.
Syntax
fn set_times(&self, path : &str, modified_time : i64, access_time : i64) -> Result<(), CloudFilesError>
Remarks
This method sets the modification and/or access times of the resource at Path to the given values.
Both ModifiedTime and AccessTime should be specified as a number of milliseconds relative to the Unix epoch. If either parameter is negative, the file's current value will remain unchanged. (If both parameters are negative, no request is sent.)
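For example, the current time can be converted to the expected millisecond value with the standard library (a sketch; the access time is left unchanged by passing a negative value):
use std::time::{SystemTime, UNIX_EPOCH};
use cloudfiles::{CloudFilesError, HadoopDFS};
// Set a resource's modification time to "now" and leave its access time as-is.
fn touch_modified(hdfs: &HadoopDFS, path: &str) -> Result<(), CloudFilesError> {
    let now_ms = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("system clock is before the Unix epoch")
        .as_millis() as i64;
    hdfs.set_times(path, now_ms, -1)
}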
truncate_file method (HadoopDFS Struct)
Truncates a file to a given size.
Syntax
fn truncate_file(&self, file_path : &str, new_size : i64) -> Result<bool, CloudFilesError>
Remarks
This method truncates the file at FilePath to the given NewSize (specified in bytes). NewSize must not be less than 0 or greater than the file's current size.
This method returns true if the file has been truncated successfully and is immediately ready for further modifications.
This method returns false if the server is still in the process of truncating the file (this could happen, e.g., if the server needs to truncate multiple replicas of the file). In this case, the server will reject any further attempts to modify the file until it has finished truncating it. Monitor the file's size using get_resource_info to determine when the truncation process has finished.
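A sketch of handling the return value (the polling interval is arbitrary, and how the reported size is inspected afterward is left out):
use std::{thread, time::Duration};
use cloudfiles::{CloudFilesError, HadoopDFS};
// Truncate a file and, if the server is still working on it, wait briefly
// before re-checking the file's reported size via get_resource_info.
fn truncate_and_check(hdfs: &HadoopDFS, path: &str, new_size: i64) -> Result<(), CloudFilesError> {
    let done = hdfs.truncate_file(path, new_size)?;
    if !done {
        // The server rejects further modifications until truncation finishes.
        thread::sleep(Duration::from_millis(500));
        hdfs.get_resource_info(path)?; // size arrives via on_resource_list / resources
    }
    Ok(())
}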
upload_file method (HadoopDFS Struct)
Uploads a file.
Syntax
fn upload_file(&self, new_file_path : &str) -> Result<String, CloudFilesError>
Remarks
This method uploads a new file to NewFilePath and returns the full path of the uploaded file. If a file already exists on the server at NewFilePath, the overwrite property controls whether the server will overwrite the file or return an error.
If the CreatePermission configuration setting is non-empty, the file will be created with the permission it specifies. Otherwise, the server's default (755) will be used.
If local_file is set, the file will be uploaded from the specified path. If local_file is not set, the data in resource_data will be used.
To encrypt the file before uploading it, set encryption_algorithm and encryption_password.
hdfs.LocalFile = "../MyFile.zip";
hdfs.UploadFile("/MyFile.zip");
on_end_transfer event (HadoopDFS Struct)
This event fires when a document finishes transferring.
Syntax
// HadoopDFSEndTransferEventArgs carries the HadoopDFS EndTransfer event's parameters.
pub struct HadoopDFSEndTransferEventArgs {
fn direction(&self) -> i32
}
// HadoopDFSEndTransferEvent defines the signature of the HadoopDFS EndTransfer event's handler function.
pub trait HadoopDFSEndTransferEvent {
fn on_end_transfer(&self, sender : HadoopDFS, e : &mut HadoopDFSEndTransferEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_end_transfer(&self) -> &'a dyn HadoopDFSEndTransferEvent;
pub fn set_on_end_transfer(&mut self, value : &'a dyn HadoopDFSEndTransferEvent);
...
}
Remarks
The on_end_transfer event is fired when the document text finishes transferring from the server to the local host.
The Direction parameter shows whether the client (0) or the server (1) is sending the data.
on_error event (HadoopDFS Struct)
Fired when information is available about errors during data delivery.
Syntax
// HadoopDFSErrorEventArgs carries the HadoopDFS Error event's parameters.
pub struct HadoopDFSErrorEventArgs {
fn error_code(&self) -> i32
fn description(&self) -> &String
}
// HadoopDFSErrorEvent defines the signature of the HadoopDFS Error event's handler function.
pub trait HadoopDFSErrorEvent {
fn on_error(&self, sender : HadoopDFS, e : &mut HadoopDFSErrorEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_error(&self) -> &'a dyn HadoopDFSErrorEvent;
pub fn set_on_error(&mut self, value : &'a dyn HadoopDFSErrorEvent);
...
}
Remarks
The on_error event is fired in case of exceptional conditions during message processing. Normally the struct fails with an error.
The ErrorCode parameter contains an error code, and the Description parameter contains a textual description of the error. For a list of valid error codes and their descriptions, please refer to the Error Codes section.
on_header event (HadoopDFS Struct)
Fired every time a header line comes in.
Syntax
// HadoopDFSHeaderEventArgs carries the HadoopDFS Header event's parameters.
pub struct HadoopDFSHeaderEventArgs {
fn field(&self) -> &String
fn value(&self) -> &String
}
// HadoopDFSHeaderEvent defines the signature of the HadoopDFS Header event's handler function.
pub trait HadoopDFSHeaderEvent {
fn on_header(&self, sender : HadoopDFS, e : &mut HadoopDFSHeaderEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_header(&self) -> &'a dyn HadoopDFSHeaderEvent;
pub fn set_on_header(&mut self, value : &'a dyn HadoopDFSHeaderEvent);
...
}
Remarks
The Field parameter contains the name of the HTTP header (which is the same as it is delivered). The Value parameter contains the header contents.
If the header line being retrieved is a continuation header line, then the Field parameter contains "" (empty string).
on_log event (HadoopDFS Struct)
Fired once for each log message.
Syntax
// HadoopDFSLogEventArgs carries the HadoopDFS Log event's parameters.
pub struct HadoopDFSLogEventArgs {
fn log_level(&self) -> i32
fn message(&self) -> &String
fn log_type(&self) -> &String
}
// HadoopDFSLogEvent defines the signature of the HadoopDFS Log event's handler function.
pub trait HadoopDFSLogEvent {
fn on_log(&self, sender : HadoopDFS, e : &mut HadoopDFSLogEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_log(&self) -> &'a dyn HadoopDFSLogEvent;
pub fn set_on_log(&mut self, value : &'a dyn HadoopDFSLogEvent);
...
}
Remarks
This event is fired once for each log message generated by the struct. The verbosity is controlled by the LogLevel setting.
LogLevel indicates the level of message. Possible values are as follows:
| 0 (None) | No events are logged. |
| 1 (Info - default) | Informational events are logged. |
| 2 (Verbose) | Detailed data are logged. |
| 3 (Debug) | Debug data are logged. |
The value 1 (Info) logs basic information, including the URL, HTTP version, and status details.
The value 2 (Verbose) logs additional information about the request and response.
The value 3 (Debug) logs the headers and body for both the request and response, as well as additional debug information (if any).
Message is the log entry.
LogType identifies the type of log entry. Possible values are as follows:
- "Info"
- "RequestHeaders"
- "ResponseHeaders"
- "RequestBody"
- "ResponseBody"
- "ProxyRequest"
- "ProxyResponse"
- "FirewallRequest"
- "FirewallResponse"
on_progress event (HadoopDFS Struct)
Fires during an upload or download to indicate transfer progress.
Syntax
// HadoopDFSProgressEventArgs carries the HadoopDFS Progress event's parameters.
pub struct HadoopDFSProgressEventArgs {
fn direction(&self) -> i32
fn bytes_transferred(&self) -> i64
fn total_bytes(&self) -> i64
fn percent_done(&self) -> i32
}
// HadoopDFSProgressEvent defines the signature of the HadoopDFS Progress event's handler function.
pub trait HadoopDFSProgressEvent {
fn on_progress(&self, sender : HadoopDFS, e : &mut HadoopDFSProgressEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_progress(&self) -> &'a dyn HadoopDFSProgressEvent;
pub fn set_on_progress(&mut self, value : &'a dyn HadoopDFSProgressEvent);
...
}
Remarks
This event fires during an upload or download to indicate the progress of the transfer of the entire request. By default, this event will fire each time PercentDone increases by one percent; the ProgressStep configuration setting can be used to alter this behavior.
Direction indicates whether the transfer is an upload (0) or a download (1).
BytesTransferred reflects the number of bytes that have been transferred so far, or 0 if the transfer is starting (however, see note below).
TotalBytes reflects the total number of bytes that are to be transferred, or -1 if the total is unknown. This amount includes the size of everything in the request like HTTP headers.
PercentDone reflects the overall progress of the transfer, or -1 if the progress cannot be calculated.
NOTE: By default, the struct tracks transfer progress absolutely. If a transfer is interrupted and later resumed, the values reported by this event upon and after resumption will account for the data that was transferred before the interruption.
For example, if 10MB of data was successfully transferred before the interruption, then this event will fire with a BytesTransferred value of 10485760 (10MB) when the transfer is first resumed, and then continue to fire with successively greater values as usual.
This behavior can be changed by disabling the ProgressAbsolute configuration setting, in which case the struct will treat resumed transfers as "new" transfers. In this case, the BytesTransferred parameter will always be 0 the first time this event fires, regardless of whether the transfer is new or being resumed.
on_resource_list event (HadoopDFS Struct)
Fires once for each resource returned when listing resources.
Syntax
// HadoopDFSResourceListEventArgs carries the HadoopDFS ResourceList event's parameters.
pub struct HadoopDFSResourceListEventArgs {
fn name(&self) -> &String
fn path(&self) -> &String
fn resource_type(&self) -> i32
fn modified_time(&self) -> i64
fn access_time(&self) -> i64
fn size(&self) -> i64
fn permission(&self) -> &String
fn owner(&self) -> &String
fn group(&self) -> &String
fn replication(&self) -> i32
}
// HadoopDFSResourceListEvent defines the signature of the HadoopDFS ResourceList event's handler function.
pub trait HadoopDFSResourceListEvent {
fn on_resource_list(&self, sender : HadoopDFS, e : &mut HadoopDFSResourceListEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_resource_list(&self) -> &'a dyn HadoopDFSResourceListEvent;
pub fn set_on_resource_list(&mut self, value : &'a dyn HadoopDFSResourceListEvent);
...
}
Remarks
This event fires once for each resource returned when list_resources or get_resource_info is called.
Name is the name of the resource.
Path is the full path of the resource.
ResourceType reflects the resource's type. Possible values are:
| 0 (hrtFile) | A file. |
| 1 (hrtDirectory) | A directory. |
| 2 (hrtSymLink) | A symlink. |
ModifiedTime and AccessTime reflect the resource's last modified and last access times, in milliseconds relative to the Unix epoch.
Size reflects the size of the file, in bytes. Always 0 for directories.
Permission reflects the resource's permission bits, represented as an octal string (e.g., 755).
Owner is the name of the resource's owner.
Group is the name of the resource's group.
Replication reflects the file's replication factor. Always 0 for directories.
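A sketch of wiring up a handler in Rust, mirroring the trait signatures shown above (the import paths and the exact registration pattern are assumptions):
use cloudfiles::{CloudFilesError, HadoopDFS, HadoopDFSResourceListEvent, HadoopDFSResourceListEventArgs};
// A handler that prints each resource as it is reported.
struct PrintingListener;
impl HadoopDFSResourceListEvent for PrintingListener {
    fn on_resource_list(&self, _sender: HadoopDFS, e: &mut HadoopDFSResourceListEventArgs) {
        println!("{} ({} bytes) at {}", e.name(), e.size(), e.path());
    }
}
fn list_cats<'a>(hdfs: &mut HadoopDFS<'a>, listener: &'a PrintingListener) -> Result<(), CloudFilesError> {
    hdfs.set_on_resource_list(listener);
    hdfs.list_resources("/work_files/serious_business/cats")
}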
on_ssl_server_authentication event (HadoopDFS Struct)
Fired after the server presents its certificate to the client.
Syntax
// HadoopDFSSSLServerAuthenticationEventArgs carries the HadoopDFS SSLServerAuthentication event's parameters.
pub struct HadoopDFSSSLServerAuthenticationEventArgs {
fn cert_encoded(&self) -> &[u8]
fn cert_subject(&self) -> &String
fn cert_issuer(&self) -> &String
fn status(&self) -> &String
fn accept(&self) -> bool
fn set_accept(&self, value : bool)
}
// HadoopDFSSSLServerAuthenticationEvent defines the signature of the HadoopDFS SSLServerAuthentication event's handler function.
pub trait HadoopDFSSSLServerAuthenticationEvent {
fn on_ssl_server_authentication(&self, sender : HadoopDFS, e : &mut HadoopDFSSSLServerAuthenticationEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_ssl_server_authentication(&self) -> &'a dyn HadoopDFSSSLServerAuthenticationEvent;
pub fn set_on_ssl_server_authentication(&mut self, value : &'a dyn HadoopDFSSSLServerAuthenticationEvent);
...
}
Remarks
During this event, the client can decide whether or not to continue with the connection process. The Accept parameter is a recommendation on whether to continue or close the connection. This is just a suggestion: application software must use its own logic to determine whether or not to continue.
When Accept is False, Status shows why the verification failed (otherwise, Status contains the string OK). If it is decided to continue, you can override and accept the certificate by setting the Accept parameter to True.
on_ssl_status event (HadoopDFS Struct)
Fired when secure connection progress messages are available.
Syntax
// HadoopDFSSSLStatusEventArgs carries the HadoopDFS SSLStatus event's parameters.
pub struct HadoopDFSSSLStatusEventArgs {
fn message(&self) -> &String
}
// HadoopDFSSSLStatusEvent defines the signature of the HadoopDFS SSLStatus event's handler function.
pub trait HadoopDFSSSLStatusEvent {
fn on_ssl_status(&self, sender : HadoopDFS, e : &mut HadoopDFSSSLStatusEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_ssl_status(&self) -> &'a dyn HadoopDFSSSLStatusEvent;
pub fn set_on_ssl_status(&mut self, value : &'a dyn HadoopDFSSSLStatusEvent);
...
}
Remarks
The event is fired for informational and logging purposes only. This event tracks the progress of the connection.
on_start_transfer event (HadoopDFS Struct)
This event fires when a document starts transferring (after the headers).
Syntax
// HadoopDFSStartTransferEventArgs carries the HadoopDFS StartTransfer event's parameters.
pub struct HadoopDFSStartTransferEventArgs {
fn direction(&self) -> i32
}
// HadoopDFSStartTransferEvent defines the signature of the HadoopDFS StartTransfer event's handler function.
pub trait HadoopDFSStartTransferEvent {
fn on_start_transfer(&self, sender : HadoopDFS, e : &mut HadoopDFSStartTransferEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_start_transfer(&self) -> &'a dyn HadoopDFSStartTransferEvent;
pub fn set_on_start_transfer(&mut self, value : &'a dyn HadoopDFSStartTransferEvent);
...
}
Remarks
The on_start_transfer event is fired when the document text starts transferring from the server to the local host.
The Direction parameter shows whether the client (0) or the server (1) is sending the data.
on_transfer event (HadoopDFS Struct)
Fired while a document transfers (delivers document).
Syntax
// HadoopDFSTransferEventArgs carries the HadoopDFS Transfer event's parameters.
pub struct HadoopDFSTransferEventArgs {
fn direction(&self) -> i32
fn bytes_transferred(&self) -> i64
fn percent_done(&self) -> i32
fn text(&self) -> &[u8]
}
// HadoopDFSTransferEvent defines the signature of the HadoopDFS Transfer event's handler function.
pub trait HadoopDFSTransferEvent {
fn on_transfer(&self, sender : HadoopDFS, e : &mut HadoopDFSTransferEventArgs);
}
impl <'a> HadoopDFS<'a> {
pub fn on_transfer(&self) -> &'a dyn HadoopDFSTransferEvent;
pub fn set_on_transfer(&mut self, value : &'a dyn HadoopDFSTransferEvent);
...
}
Remarks
The Text parameter contains the portion of the document text being received. It is empty if data are being posted to the server.
The BytesTransferred parameter contains the number of bytes transferred in this Direction since the beginning of the document text (excluding HTTP response headers).
The Direction parameter shows whether the client (0) or the server (1) is sending the data.
The PercentDone parameter shows the progress of the transfer in the corresponding direction. If PercentDone cannot be calculated, the value will be -1.
NOTE: Events are not re-entrant. Performing time-consuming operations within this event will prevent it from firing again in a timely manner and may affect overall performance.
Config Settings (HadoopDFS Struct)
The struct accepts one or more of the following configuration settings. Configuration settings are similar in functionality to properties, but they are rarely used. In order to avoid "polluting" the property namespace of the struct, access to these internal properties is provided through the config method.
HadoopDFS Config Settings
CreatePermission: This setting must be formatted as an octal string; it is empty by default, causing the server's default (755) to be used.
DownloadTempFile: When downloading encrypted data with local_file set, the struct will automatically create a temporary file at TempPath to hold the encrypted file contents. When the download is complete, the data is decrypted to local_file.
If the download is interrupted, the specified file will hold the partially downloaded encrypted file contents. Before resuming the download, this setting must be set to a valid file containing the partially encrypted file contents. See download_file for details.
This setting accepts a hex encoded value.
This setting accepts a hex encoded value.
- 0 (default) - PBKDF1
- 1 - PBKDF2
ProgressAbsolute: If this setting is enabled (default), then when a transfer is interrupted and later resumed, the values reported by the on_progress event will account for the data that was successfully transferred before the interruption.
If this setting is disabled, then the struct will treat resumed transfers as "new" transfers, and the values reported by the on_progress event will start at 0 rather than from the number of bytes already transferred.
Refer to the on_progress event for more information.
ProgressStep: The default value, 1, will cause the on_progress event to fire each time the event's PercentDone parameter value increases by one percent. Setting it to 0 will cause the on_progress event to fire every time data is transferred.
Note that the on_progress event will always fire once at the beginning and end of a transfer, regardless of this setting's value. Also, if PercentDone cannot be calculated for a particular transfer (e.g., for downloads that use chunked transfer encoding), then the struct will behave as if this setting were 0 for the duration of the transfer.
RecursiveDelete: By default, this setting is enabled. If this setting is disabled, non-empty directories must be emptied before they can be deleted.
The current element is specified through the XPath configuration setting. This configuration setting is read-only.
The current element is specified through the XPath configuration setting. This configuration setting is read-only.
The current element is specified through the XPath configuration setting. This configuration setting is read-only.
The current element is specified through the XPath configuration setting. This configuration setting is read-only.
The current element is specified through the XPath configuration setting. This configuration setting is read-only.
When XPath is set to a valid path, XElement points to the name of the element, with XText, XParent, XSubTree, XChildCount, XChildName[i], and XChildXText[i] providing other properties of the element.
XPath syntax is available for both XML and JSON documents. An XPath is a series of one or more element accessors separated by the / character, for example, /A/B/C/D. An XPath can be absolute (i.e., it starts with /), or it can be relative to the current XPath location.
The following are possible values for an element accessor, which operates relative to the current location specified by the accessors that precede it in the overall XPath string:
| Accessor | Description |
| name | The first element with a particular name. Can be *. |
| [i] | The i-th element. |
| name[i] | The i-th element with a particular name. |
| [last()] | The last element. |
| [last()-i] | The element i before the last element. |
| name[@attrname="attrvalue"] | The first element with a particular name that contains the specified attribute-value pair.
Supports single and double quotes. (XML Only) |
| . | The current element. |
| .. | The parent element. |
For example, assume the following XML and JSON responses.
XML:
<firstlevel>
<one>value</one>
<two>
<item>first</item>
<item>second</item>
</two>
<three>value three</three>
</firstlevel>
JSON:
{
"firstlevel": {
"one": "value",
"two": ["first", "second"],
"three": "value three"
}
}
The following are examples of valid XPaths for these responses:
| Description | XML XPath | JSON XPath |
| Document root | / | /json |
| Specific element | /firstlevel/one | /json/firstlevel/one |
| i-th child | /firstlevel/two/item[2] | /json/firstlevel/two/[2] |
This list is not exhaustive, but it provides a general idea of the possibilities.
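For instance, after a request whose response is the JSON document above, the second item of "two" could be read back like this (a sketch using the config method):
use cloudfiles::{CloudFilesError, HadoopDFS};
// Select the second element of "two" in the JSON response and read its text.
fn read_second_item(hdfs: &HadoopDFS) -> Result<String, CloudFilesError> {
    hdfs.config("XPath=/json/firstlevel/two/[2]")?;
    hdfs.config("XText") // "second"
}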
The current element is specified through the XPath configuration setting. This configuration setting is read-only.
The current element is specified in the XPath configuration setting. This configuration setting is read-only.
HTTP Config Settings
When True, the struct adds an Accept-Encoding header to the outgoing request. The value for this header can be controlled by the AcceptEncoding configuration setting. The default value for this header is "gzip, deflate".
The default value is True.
If set to True (default), the struct will automatically use HTTP/1.1 if the server does not support HTTP/2. If set to False, the struct fails with an error if the server does not support HTTP/2.
The default value is True.
This property is provided so that the HTTP struct can be extended with other security schemes in addition to the authorization schemes already implemented by the struct.
The auth_scheme property defines the authentication scheme used. In the case of HTTP Basic Authentication (default), every time user and password are set, they are Base64 encoded, and the result is put in the authorization property in the form "Basic [encoded-user-password]".
The default value is False.
If this property is set to 2 (Same Scheme), the new url is retrieved automatically only if the URL Scheme is the same; otherwise, the struct fails with an error.
Note: Following the HTTP specification, unless this option is set to 1 (Always), automatic redirects will be performed only for GET or HEAD requests. Other methods potentially could change the conditions of the initial request and create security vulnerabilities.
Furthermore, if either the new URL server or port are different from the existing one, user and password are also reset to empty, unless this property is set to 1 (Always), in which case the same credentials are used to connect to the new server.
A on_redirect event is fired for every URL the product is redirected to. In the case of automatic redirections, the on_redirect event is a good place to set properties related to the new connection (e.g., new authentication parameters).
The default value is 0 (Never). In this case, redirects are never followed, and the struct fails with an error instead.
Following are the valid options:
- 0 - Never
- 1 - Always
- 2 - Same Scheme
- "1.0"
- "1.1" (default)
- "2.0"
- "3.0"
When using HTTP/2 ("2.0") or HTTP/3 ("3.0"), additional restrictions apply. Please see the following notes for details.
HTTP/2 Notes
When using HTTP/2, a secure (TLS/SSL) connection is required. Attempting to use a plaintext URL with HTTP/2 will result in an error.
If the server does not support HTTP/2, the struct will automatically use HTTP/1.1 instead. This is done to provide compatibility without the need for any additional settings. To see which version was used, check NegotiatedHTTPVersion after calling a method. The AllowHTTPFallback setting controls whether this behavior is allowed (default) or disallowed.
HTTP/3 Notes
HTTP/3 is supported only in .NET and Java.
When using HTTP/3, a secure (TLS/SSL) connection is required. Attempting to use a plaintext URL with HTTP/3 will result in an error.
The format of the date value for IfModifiedSince is detailed in the HTTP specs. For example:
Sat, 29 Oct 2017 19:43:31 GMT.
The default value for KeepAlive is false.
| 0 (None) | No events are logged. |
| 1 (Info - default) | Informational events are logged. |
| 2 (Verbose) | Detailed data are logged. |
| 3 (Debug) | Debug data are logged. |
The value 1 (Info) logs basic information, including the URL, HTTP version, and status details.
The value 2 (Verbose) logs additional information about the request and response.
The value 3 (Debug) logs the headers and body for both the request and response, as well as additional debug information (if any).
The headers must follow the format "header: value" as described in the HTTP specifications. Header lines should be separated by CRLF ("\r\n").
Use this configuration setting with caution. If this configuration setting contains invalid headers, HTTP requests may fail.
This configuration setting is useful for extending the functionality of the struct beyond what is provided.
.NET
Http http = new Http();
http.Config("TransferredRequest=on");
http.PostData = "body";
http.Post("http://someserver.com");
Console.WriteLine(http.Config("TransferredRequest"));
C++
HTTP http;
http.Config("TransferredRequest=on");
http.SetPostData("body", 5);
http.Post("http://someserver.com");
printf("%s\r\n", http.Config("TransferredRequest"));
NOTE: Some servers (such as the ASP.NET Development Server) may not support chunked encoding.
The default value is False and the hostname will always be used exactly as specified. NOTE: The CodePage setting must be set to a value capable of interpreting the specified host name. For instance, to specify UTF-8, set CodePage to 65001. In the C++ Edition for Windows, the *W version of the class must be used. For instance, DNSW or HTTPW.
When True (default), the struct will check for the existence of a Proxy auto-config URL, and if found, will determine the appropriate proxy to use.
Override the default with the name and version of your software.
TCPClient Config Settings
If the FirewallHost setting is set to a Domain Name, a DNS request is initiated. Upon successful termination of the request, the FirewallHost setting is set to the corresponding address. If the search is not successful, an error is returned.
NOTE: This setting is provided for use by structs that do not directly expose Firewall properties.
NOTE: This setting is provided for use by structs that do not directly expose Firewall properties.
NOTE: This configuration setting is provided for use by structs that do not directly expose Firewall properties.
| 0 | No firewall (default setting). |
| 1 | Connect through a tunneling proxy. FirewallPort is set to 80. |
| 2 | Connect through a SOCKS4 Proxy. FirewallPort is set to 1080. |
| 3 | Connect through a SOCKS5 Proxy. FirewallPort is set to 1080. |
| 10 | Connect through a SOCKS4A Proxy. FirewallPort is set to 1080. |
NOTE: This setting is provided for use by structs that do not directly expose Firewall properties.
NOTE: This setting is provided for use by structs that do not directly expose Firewall properties.
NOTE: This value is not applicable in macOS.
In the case that Linger is True (default), two scenarios determine how long the connection will linger. In the first, if LingerTime is 0 (default), the system will attempt to send pending data for a connection until the default IP timeout expires.
In the second scenario, if LingerTime is a positive value, the system will attempt to send pending data until the specified LingerTime is reached. If this attempt fails, then the system will reset the connection.
The default behavior (which is also the default mode for stream sockets) might result in a long delay in closing the connection. Although the struct returns control immediately, the system could hold system resources until all pending data are sent (even after your application closes).
Setting this property to False forces an immediate disconnection. If you know that the other side has received all the data you sent (e.g., by a client acknowledgment), setting this property to False might be the appropriate course of action.
In multihomed hosts (machines with more than one IP interface), setting LocalHost to the value of an interface will make the struct initiate connections (or accept in the case of server structs) only through that interface.
If the struct is connected, the local_host setting shows the IP address of the interface through which the connection is made in internet dotted format (aaa.bbb.ccc.ddd). In most cases, this is the address of the local host, except for multihomed hosts (machines with more than one IP interface).
Setting this to 0 (default) enables the system to choose a port at random. The chosen port will be shown by local_port after the connection is established.
local_port cannot be changed once a connection is made. Any attempt to set this when a connection is active will generate an error.
This configuration setting is useful when trying to connect to services that require a trusted port on the client side. An example is the remote shell (rsh) service in UNIX systems.
If an eol string is found in the input stream before MaxLineLength bytes are received, the on_data_in event is fired with the EOL parameter set to True, and the buffer is reset.
If no eol is found, and MaxLineLength bytes are accumulated in the buffer, the on_data_in event is fired with the EOL parameter set to False, and the buffer is reset.
The minimum value for MaxLineLength is 256 bytes. The default value is 2048 bytes.
www.google.com;www.example.com
NOTE: This value is not applicable in Java.
By default, this configuration setting is set to False.
| 0 | IPv4 only |
| 1 | IPv6 only |
| 2 | IPv6 with IPv4 fallback |
SSL Config Settings
When enabled, SSL packet logs are output using the on_ssl_status event, which will fire each time an SSL packet is sent or received.
Enabling this configuration setting has no effect if ssl_provider is set to Platform.
The path set by this property should point to a directory containing CA certificates in PEM format. The files each contain one CA certificate. The files are looked up by the CA subject name hash value, which must hence be available. If more than one CA certificate with the same name hash value exists, the extension must be different (e.g., 9d66eef0.0, 9d66eef0.1). OpenSSL recommends the use of the c_rehash utility to create the necessary links. Please refer to the OpenSSL man page SSL_CTX_load_verify_locations(3) for details.
The file set by this property should contain a list of CA certificates in PEM format. The file can contain several CA certificates identified by the following sequences:
-----BEGIN CERTIFICATE-----
... (CA certificate in base64 encoding) ...
-----END CERTIFICATE-----
Text is allowed before, between, and after the certificates (for example, to describe each certificate). Refer to the OpenSSL man page SSL_CTX_load_verify_locations(3) for details.
The format of this string is described in the OpenSSL man page ciphers(1) section "CIPHER LIST FORMAT". Please refer to it for details. The default string "DEFAULT" is determined at compile time and is normally equivalent to "ALL:!ADH:RC4+RSA:+SSLv2:@STRENGTH".
By default, OpenSSL uses the device file "/dev/urandom" to seed the PRNG, and setting OpenSSLPrngSeedData is not required. If set, the string specified is used to seed the PRNG.
If set to True, the struct will reuse the context if and only if the following criteria are met:
- The target host name is the same.
- The system cache entry has not expired (default timeout is 10 hours).
- The application process that calls the function is the same.
- The logon session is the same.
- The instance of the struct is the same.
-----BEGIN CERTIFICATE----- MIIEKzCCAxOgAwIBAgIRANTET4LIkxdH6P+CFIiHvTowDQYJKoZIhvcNAQELBQAw ... Intermediate Cert ... eWHV5OW1K53o/atv59sOiW5K3crjFhsBOd5Q+cJJnU+SWinPKtANXMht+EDvYY2w F0I1XhM+pKj7FjDr+XNj -----END CERTIFICATE----- \r \n -----BEGIN CERTIFICATE----- MIIEFjCCAv6gAwIBAgIQetu1SMxpnENAnnOz1P+PtTANBgkqhkiG9w0BAQUFADBp ... Root Cert ... d8q23djXZbVYiIfE9ebr4g3152BlVCHZ2GyPdjhIuLeH21VbT/dyEHHA -----END CERTIFICATE-----
When set to 0 (default), the CRL check will not be performed by the struct. When set to 1, it will attempt to perform the CRL check, but it will continue without an error if the server's certificate does not support CRL. When set to 2, it will perform the CRL check and will throw an error if CRL is not supported.
This configuration setting is supported only in the Java, C#, and C++ editions. In the C++ edition, it is supported only on Windows operating systems.
When set to 0 (default), the struct will not perform an OCSP check. When set to 1, it will attempt to perform the OCSP check, but it will continue without an error if the server's certificate does not support OCSP. When set to 2, it will perform the OCSP check and will throw an error if OCSP is not supported.
This configuration setting is supported only in the Java, C#, and C++ editions. In the C++ edition, it is supported only on Windows operating systems.
NOTE: This configuration setting contains the minimum cipher strength requested from the security library. The actual cipher strength used for the connection is shown by the on_ssl_status event.
Use this configuration setting with caution. Requesting a lower cipher strength than necessary could potentially cause serious security vulnerabilities in your application.
When the provider is OpenSSL, SSLCipherStrength is currently not supported. This functionality is instead made available through the OpenSSLCipherList configuration setting.
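For example, a minimum cipher strength can be requested as follows (a sketch; per the warning above, do not request a lower strength than necessary):
hdfs.Config("SSLCipherStrength=128"); // request ciphers of at least 128 bits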
The value of this configuration setting is a newline-separated (CR/LF) list of certificates. For instance:
-----BEGIN CERTIFICATE----- MIIEKzCCAxOgAwIBAgIRANTET4LIkxdH6P+CFIiHvTowDQYJKoZIhvcNAQELBQAw ... Intermediate Cert ... eWHV5OW1K53o/atv59sOiW5K3crjFhsBOd5Q+cJJnU+SWinPKtANXMht+EDvYY2w F0I1XhM+pKj7FjDr+XNj -----END CERTIFICATE----- \r \n -----BEGIN CERTIFICATE----- MIIEFjCCAv6gAwIBAgIQetu1SMxpnENAnnOz1P+PtTANBgkqhkiG9w0BAQUFADBp ... Root Cert ... d8q23djXZbVYiIfE9ebr4g3152BlVCHZ2GyPdjhIuLeH21VbT/dyEHHA -----END CERTIFICATE-----
By default, the enabled cipher suites will include all available ciphers ("*").
The special value "*" means that the struct will pick all of the supported cipher suites. If SSLEnabledCipherSuites is set to any other value, only the specified cipher suites will be considered.
Multiple cipher suites are separated by semicolons.
Example values when ssl_provider is set to Platform include the following:
obj.config("SSLEnabledCipherSuites=*");
obj.config("SSLEnabledCipherSuites=CALG_AES_256");
obj.config("SSLEnabledCipherSuites=CALG_AES_256;CALG_3DES");
Possible values when ssl_provider is set to Platform include the following:
- CALG_3DES
- CALG_3DES_112
- CALG_AES
- CALG_AES_128
- CALG_AES_192
- CALG_AES_256
- CALG_AGREEDKEY_ANY
- CALG_CYLINK_MEK
- CALG_DES
- CALG_DESX
- CALG_DH_EPHEM
- CALG_DH_SF
- CALG_DSS_SIGN
- CALG_ECDH
- CALG_ECDH_EPHEM
- CALG_ECDSA
- CALG_ECMQV
- CALG_HASH_REPLACE_OWF
- CALG_HUGHES_MD5
- CALG_HMAC
- CALG_KEA_KEYX
- CALG_MAC
- CALG_MD2
- CALG_MD4
- CALG_MD5
- CALG_NO_SIGN
- CALG_OID_INFO_CNG_ONLY
- CALG_OID_INFO_PARAMETERS
- CALG_PCT1_MASTER
- CALG_RC2
- CALG_RC4
- CALG_RC5
- CALG_RSA_KEYX
- CALG_RSA_SIGN
- CALG_SCHANNEL_ENC_KEY
- CALG_SCHANNEL_MAC_KEY
- CALG_SCHANNEL_MASTER_HASH
- CALG_SEAL
- CALG_SHA
- CALG_SHA1
- CALG_SHA_256
- CALG_SHA_384
- CALG_SHA_512
- CALG_SKIPJACK
- CALG_SSL2_MASTER
- CALG_SSL3_MASTER
- CALG_SSL3_SHAMD5
- CALG_TEK
- CALG_TLS1_MASTER
- CALG_TLS1PRF
obj.config("SSLEnabledCipherSuites=*");
obj.config("SSLEnabledCipherSuites=TLS_DHE_DSS_WITH_AES_128_CBC_SHA");
obj.config("SSLEnabledCipherSuites=TLS_DHE_DSS_WITH_AES_128_CBC_SHA;TLS_ECDH_RSA_WITH_AES_128_CBC_SHA");
Possible values when ssl_provider is set to Internal include the following:
- TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDH_ECDSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_256_GCM_SHA384
- TLS_RSA_WITH_AES_128_GCM_SHA256
- TLS_ECDH_ECDSA_WITH_AES_128_GCM_SHA256
- TLS_DHE_DSS_WITH_AES_256_GCM_SHA384
- TLS_DHE_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDH_RSA_WITH_AES_256_GCM_SHA384
- TLS_ECDH_RSA_WITH_AES_128_GCM_SHA256
- TLS_DHE_RSA_WITH_AES_128_GCM_SHA256
- TLS_DHE_DSS_WITH_AES_128_GCM_SHA256
- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA384
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256
- TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA384
- TLS_DHE_DSS_WITH_AES_256_CBC_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA256
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384
- TLS_ECDH_RSA_WITH_AES_256_CBC_SHA384
- TLS_DHE_RSA_WITH_AES_256_CBC_SHA256
- TLS_DHE_RSA_WITH_AES_128_CBC_SHA256
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256
- TLS_RSA_WITH_AES_128_CBC_SHA256
- TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA256
- TLS_ECDH_RSA_WITH_AES_128_CBC_SHA256
- TLS_DHE_DSS_WITH_AES_128_CBC_SHA256
- TLS_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDH_ECDSA_WITH_AES_256_CBC_SHA
- TLS_DHE_RSA_WITH_AES_256_CBC_SHA
- TLS_ECDH_RSA_WITH_AES_256_CBC_SHA
- TLS_DHE_DSS_WITH_AES_256_CBC_SHA
- TLS_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA
- TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA
- TLS_ECDH_ECDSA_WITH_AES_128_CBC_SHA
- TLS_ECDH_RSA_WITH_AES_128_CBC_SHA
- TLS_DHE_RSA_WITH_AES_128_CBC_SHA
- TLS_DHE_DSS_WITH_AES_128_CBC_SHA
- TLS_ECDHE_ECDSA_WITH_3DES_EDE_CBC_SHA
- TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_ECDH_ECDSA_WITH_3DES_EDE_CBC_SHA
- TLS_ECDH_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_DHE_DSS_WITH_3DES_EDE_CBC_SHA
- TLS_RSA_WITH_3DES_EDE_CBC_SHA
- TLS_RSA_WITH_DES_CBC_SHA
- TLS_DHE_RSA_WITH_DES_CBC_SHA
- TLS_DHE_DSS_WITH_DES_CBC_SHA
- TLS_RSA_WITH_RC4_128_MD5
- TLS_RSA_WITH_RC4_128_SHA
When TLS 1.3 is negotiated (see SSLEnabledProtocols), only the following cipher suites are supported:
- TLS_AES_256_GCM_SHA384
- TLS_CHACHA20_POLY1305_SHA256
- TLS_AES_128_GCM_SHA256
SSLEnabledCipherSuites is used together with SSLCipherStrength.
Not all supported protocols are enabled by default. The default value is 4032 for client components and 3072 for server components. To specify a combination of enabled protocol versions, set this configuration setting to the bitwise OR of one or more of the following values (an example follows the table below):
| TLS1.3 | 12288 (Hex 3000) |
| TLS1.2 | 3072 (Hex C00) (Default - Client and Server) |
| TLS1.1 | 768 (Hex 300) (Default - Client) |
| TLS1 | 192 (Hex C0) (Default - Client) |
| SSL3 | 48 (Hex 30) |
| SSL2 | 12 (Hex 0C) |
Note that only TLS 1.2 is enabled for server components that accept incoming connections. This adheres to industry standards to ensure a secure connection. Client components enable TLS 1.0, TLS 1.1, and TLS 1.2 by default and will negotiate the highest mutually supported version when connecting to a server, which should be TLS 1.2 in most cases.
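For example, to enable TLS 1.3 alongside TLS 1.2, OR the corresponding values from the table above (12288 | 3072 = 15360):
hdfs.Config("SSLEnabledProtocols=15360"); // TLS 1.3 (12288) | TLS 1.2 (3072)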
SSLEnabledProtocols: Transport Layer Security (TLS) 1.3 Notes:
By default, when TLS 1.3 is enabled and ssl_provider is set to Automatic, the struct will use the internal TLS implementation in all editions.
In editions that are designed to run on Windows, ssl_provider can be set to Platform to use the platform implementation instead of the internal implementation. When configured in this manner, please note that the platform provider is supported only on Windows 11/Windows Server 2022 and up. The default internal provider is available on all platforms and is not restricted to any specific OS version.
If set to 1 (Platform provider), please be aware of the following notes:
- The platform provider is available only on Windows 11/Windows Server 2022 and up.
- SSLEnabledCipherSuites and other similar SSL configuration settings are not supported.
- If SSLEnabledProtocols includes both TLS 1.3 and TLS 1.2, these restrictions are still applicable even if TLS 1.2 is negotiated. Enabling TLS 1.3 with the platform provider changes the implementation used for all TLS versions.
SSLEnabledProtocols: SSL2 and SSL3 Notes:
SSL 2.0 and SSL 3.0 are not supported by the struct when ssl_provider is set to Internal. To use SSL 2.0 or SSL 3.0, the platform security API must have the protocols enabled, and ssl_provider must be set to Platform.
This configuration setting is applicable only when ssl_provider is set to Internal.
If set to True, all certificates returned by the server will be present in the Encoded parameter of the on_ssl_server_authentication event. This includes the leaf certificate, any intermediate certificate, and the root certificate.
When set, the struct will save the session secrets in the same format as the SSLKEYLOGFILE environment variable functionality used by most major browsers and tools, such as Chrome, Firefox, and cURL. This file can then be used in tools such as Wireshark to decrypt TLS traffic for debugging purposes. When writing to this file, the struct will only append; it will not overwrite previous values.
NOTE: This configuration setting is applicable only when ssl_provider is set to Internal.
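A minimal sketch of enabling key logging for debugging with Wireshark; the setting name SSLKeyLogFile and the path are assumptions:
hdfs.Config("SSLKeyLogFile=C:\\logs\\tls_keys.txt"); // assumed setting name; session secrets are appended to this file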
NOTE: For server components (e.g., TCPServer), this is a per-connection configuration setting accessed by passing the ConnectionId. For example:
server.Config("SSLNegotiatedCipher[connId]");
NOTE: For server components (e.g., TCPServer), this is a per-connection configuration setting accessed by passing the ConnectionId. For example:
server.Config("SSLNegotiatedCipherStrength[connId]");
NOTE: For server components (e.g., TCPServer), this is a per-connection configuration setting accessed by passing the ConnectionId. For example:
server.Config("SSLNegotiatedCipherSuite[connId]");
NOTE: For server components (e.g., TCPServer), this is a per-connection configuration setting accessed by passing the ConnectionId. For example:
server.Config("SSLNegotiatedKeyExchange[connId]");
NOTE: For server components (e.g., TCPServer), this is a per-connection configuration setting accessed by passing the ConnectionId. For example:
server.Config("SSLNegotiatedKeyExchangeStrength[connId]");
NOTE: For server components (e.g., TCPServer), this is a per-connection configuration setting accessed by passing the ConnectionId. For example:
server.Config("SSLNegotiatedVersion[connId]");
| 0x00000001 | Ignore time validity status of certificate. |
| 0x00000002 | Ignore time validity status of CTL. |
| 0x00000004 | Ignore non-nested certificate times. |
| 0x00000010 | Allow unknown certificate authority. |
| 0x00000020 | Ignore wrong certificate usage. |
| 0x00000100 | Ignore unknown certificate revocation status. |
| 0x00000200 | Ignore unknown CTL signer revocation status. |
| 0x00000400 | Ignore unknown certificate authority revocation status. |
| 0x00000800 | Ignore unknown root revocation status. |
| 0x00008000 | Allow test root certificate. |
| 0x00004000 | Trust test root certificate. |
| 0x80000000 | Ignore non-matching CN (certificate CN non-matching server name). |
This functionality is currently not available when the provider is OpenSSL.
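The flag values above can be combined with a bitwise OR before being applied; a sketch, assuming the combined value is passed to the relevant SSL security-flags configuration setting (the name SSLSecurityFlags is an assumption):
// 0x00000001 (ignore time validity) | 0x00000010 (allow unknown CA) = 0x11 = 17
hdfs.Config("SSLSecurityFlags=17"); // assumed setting name; relaxing validation should be done with caution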
The value of this configuration setting is a newline-separated (CR/LF) list of certificates. For instance:
-----BEGIN CERTIFICATE----- MIIEKzCCAxOgAwIBAgIRANTET4LIkxdH6P+CFIiHvTowDQYJKoZIhvcNAQELBQAw ... Intermediate Cert... eWHV5OW1K53o/atv59sOiW5K3crjFhsBOd5Q+cJJnU+SWinPKtANXMht+EDvYY2w F0I1XhM+pKj7FjDr+XNj -----END CERTIFICATE----- \r \n -----BEGIN CERTIFICATE----- MIIEFjCCAv6gAwIBAgIQetu1SMxpnENAnnOz1P+PtTANBgkqhkiG9w0BAQUFADBp ... Root Cert... d8q23djXZbVYiIfE9ebr4g3152BlVCHZ2GyPdjhIuLeH21VbT/dyEHHA -----END CERTIFICATE-----
When specified, the struct will verify that the server certificate's signature algorithm is among the values specified in this configuration setting. If the server certificate's signature algorithm is unsupported, the struct fails with an error.
The format of this value is a comma-separated list of hash-signature combinations. For instance:
component.SSLProvider = TCPClientSSLProviders.sslpInternal;
component.Config("SSLEnabledProtocols=3072"); //TLS 1.2
component.Config("TLS12SignatureAlgorithms=sha256-rsa,sha256-dsa,sha1-rsa,sha1-dsa");
The default value for this configuration setting is sha512-ecdsa,sha512-rsa,sha512-dsa,sha384-ecdsa,sha384-rsa,sha384-dsa,sha256-ecdsa,sha256-rsa,sha256-dsa,sha224-ecdsa,sha224-rsa,sha224-dsa,sha1-ecdsa,sha1-rsa,sha1-dsa.
To avoid restricting the server's certificate signature algorithm, specify an empty string as the value for this configuration setting; this prevents the signature_algorithms TLS 1.2 extension from being sent.
The default value is ecdhe_secp256r1,ecdhe_secp384r1,ecdhe_secp521r1.
When using TLS 1.2 and ssl_provider is set to Internal, the values refer to the supported groups for ECC. The following values are supported:
- "ecdhe_secp256r1" (default)
- "ecdhe_secp384r1" (default)
- "ecdhe_secp521r1" (default)
The default value is set to balance common supported groups and the computational resources required to generate key shares. As a result, only some groups are included by default in this configuration setting.
NOTE: All supported groups can always be used during the handshake even if not listed here, but if a group is used that is not present in this list, it will incur an additional roundtrip and time to generate the key share for that group.
In most cases, this configuration setting does not need to be modified. This should be modified only if there is a specific reason to do so.
The default value is ecdhe_x25519,ecdhe_secp256r1,ecdhe_secp384r1,ffdhe_2048,ffdhe_3072.
The values are ordered from most preferred to least preferred. The following values are supported:
- "ecdhe_x25519" (default)
- "ecdhe_x448"
- "ecdhe_secp256r1" (default)
- "ecdhe_secp384r1" (default)
- "ecdhe_secp521r1"
- "ffdhe_2048" (default)
- "ffdhe_3072" (default)
- "ffdhe_4096"
- "ffdhe_6144"
- "ffdhe_8192"
- "ed25519" (default)
- "ed448" (default)
- "ecdsa_secp256r1_sha256" (default)
- "ecdsa_secp384r1_sha384" (default)
- "ecdsa_secp521r1_sha512" (default)
- "rsa_pkcs1_sha256" (default)
- "rsa_pkcs1_sha384" (default)
- "rsa_pkcs1_sha512" (default)
- "rsa_pss_sha256" (default)
- "rsa_pss_sha384" (default)
- "rsa_pss_sha512" (default)
The default value is ecdhe_x25519,ecdhe_x448,ecdhe_secp256r1,ecdhe_secp384r1,ecdhe_secp521r1,ffdhe_2048,ffdhe_3072,ffdhe_4096,ffdhe_6144,ffdhe_8192
The values are ordered from most preferred to least preferred. The following values are supported:
- "ecdhe_x25519" (default)
- "ecdhe_x448" (default)
- "ecdhe_secp256r1" (default)
- "ecdhe_secp384r1" (default)
- "ecdhe_secp521r1" (default)
- "ffdhe_2048" (default)
- "ffdhe_3072" (default)
- "ffdhe_4096" (default)
- "ffdhe_6144" (default)
- "ffdhe_8192" (default)
Socket Config Settings
NOTE: This option is not valid for User Datagram Protocol (UDP) ports.
Some TCP/IP implementations do not support variable buffer sizes. If that is the case, when the struct is activated, InBufferSize reverts to its defined size. The same happens if you attempt to make it too large or too small.
Some TCP/IP implementations do not support variable buffer sizes. If that is the case, when the struct is activated, OutBufferSize reverts to its defined size. The same happens if you attempt to make it too large or too small.
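For example (a sketch; as noted above, the accepted sizes depend on the TCP/IP implementation):
hdfs.Config("InBufferSize=65536"); // request a 64 KB receive buffer
hdfs.Config("OutBufferSize=65536"); // request a 64 KB send buffer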
Base Config Settings
The following is a list of valid code page identifiers:
| Identifier | Name |
| 037 | IBM EBCDIC - U.S./Canada |
| 437 | OEM - United States |
| 500 | IBM EBCDIC - International |
| 708 | Arabic - ASMO 708 |
| 709 | Arabic - ASMO 449+, BCON V4 |
| 710 | Arabic - Transparent Arabic |
| 720 | Arabic - Transparent ASMO |
| 737 | OEM - Greek (formerly 437G) |
| 775 | OEM - Baltic |
| 850 | OEM - Multilingual Latin I |
| 852 | OEM - Latin II |
| 855 | OEM - Cyrillic (primarily Russian) |
| 857 | OEM - Turkish |
| 858 | OEM - Multilingual Latin I + Euro symbol |
| 860 | OEM - Portuguese |
| 861 | OEM - Icelandic |
| 862 | OEM - Hebrew |
| 863 | OEM - Canadian-French |
| 864 | OEM - Arabic |
| 865 | OEM - Nordic |
| 866 | OEM - Russian |
| 869 | OEM - Modern Greek |
| 870 | IBM EBCDIC - Multilingual/ROECE (Latin-2) |
| 874 | ANSI/OEM - Thai (same as 28605, ISO 8859-15) |
| 875 | IBM EBCDIC - Modern Greek |
| 932 | ANSI/OEM - Japanese, Shift-JIS |
| 936 | ANSI/OEM - Simplified Chinese (PRC, Singapore) |
| 949 | ANSI/OEM - Korean (Unified Hangul Code) |
| 950 | ANSI/OEM - Traditional Chinese (Taiwan; Hong Kong SAR, PRC) |
| 1026 | IBM EBCDIC - Turkish (Latin-5) |
| 1047 | IBM EBCDIC - Latin 1/Open System |
| 1140 | IBM EBCDIC - U.S./Canada (037 + Euro symbol) |
| 1141 | IBM EBCDIC - Germany (20273 + Euro symbol) |
| 1142 | IBM EBCDIC - Denmark/Norway (20277 + Euro symbol) |
| 1143 | IBM EBCDIC - Finland/Sweden (20278 + Euro symbol) |
| 1144 | IBM EBCDIC - Italy (20280 + Euro symbol) |
| 1145 | IBM EBCDIC - Latin America/Spain (20284 + Euro symbol) |
| 1146 | IBM EBCDIC - United Kingdom (20285 + Euro symbol) |
| 1147 | IBM EBCDIC - France (20297 + Euro symbol) |
| 1148 | IBM EBCDIC - International (500 + Euro symbol) |
| 1149 | IBM EBCDIC - Icelandic (20871 + Euro symbol) |
| 1200 | Unicode UCS-2 Little-Endian (BMP of ISO 10646) |
| 1201 | Unicode UCS-2 Big-Endian |
| 1250 | ANSI - Central European |
| 1251 | ANSI - Cyrillic |
| 1252 | ANSI - Latin I |
| 1253 | ANSI - Greek |
| 1254 | ANSI - Turkish |
| 1255 | ANSI - Hebrew |
| 1256 | ANSI - Arabic |
| 1257 | ANSI - Baltic |
| 1258 | ANSI/OEM - Vietnamese |
| 1361 | Korean (Johab) |
| 10000 | MAC - Roman |
| 10001 | MAC - Japanese |
| 10002 | MAC - Traditional Chinese (Big5) |
| 10003 | MAC - Korean |
| 10004 | MAC - Arabic |
| 10005 | MAC - Hebrew |
| 10006 | MAC - Greek I |
| 10007 | MAC - Cyrillic |
| 10008 | MAC - Simplified Chinese (GB 2312) |
| 10010 | MAC - Romania |
| 10017 | MAC - Ukraine |
| 10021 | MAC - Thai |
| 10029 | MAC - Latin II |
| 10079 | MAC - Icelandic |
| 10081 | MAC - Turkish |
| 10082 | MAC - Croatia |
| 12000 | Unicode UCS-4 Little-Endian |
| 12001 | Unicode UCS-4 Big-Endian |
| 20000 | CNS - Taiwan |
| 20001 | TCA - Taiwan |
| 20002 | Eten - Taiwan |
| 20003 | IBM5550 - Taiwan |
| 20004 | TeleText - Taiwan |
| 20005 | Wang - Taiwan |
| 20105 | IA5 IRV International Alphabet No. 5 (7-bit) |
| 20106 | IA5 German (7-bit) |
| 20107 | IA5 Swedish (7-bit) |
| 20108 | IA5 Norwegian (7-bit) |
| 20127 | US-ASCII (7-bit) |
| 20261 | T.61 |
| 20269 | ISO 6937 Non-Spacing Accent |
| 20273 | IBM EBCDIC - Germany |
| 20277 | IBM EBCDIC - Denmark/Norway |
| 20278 | IBM EBCDIC - Finland/Sweden |
| 20280 | IBM EBCDIC - Italy |
| 20284 | IBM EBCDIC - Latin America/Spain |
| 20285 | IBM EBCDIC - United Kingdom |
| 20290 | IBM EBCDIC - Japanese Katakana Extended |
| 20297 | IBM EBCDIC - France |
| 20420 | IBM EBCDIC - Arabic |
| 20423 | IBM EBCDIC - Greek |
| 20424 | IBM EBCDIC - Hebrew |
| 20833 | IBM EBCDIC - Korean Extended |
| 20838 | IBM EBCDIC - Thai |
| 20866 | Russian - KOI8-R |
| 20871 | IBM EBCDIC - Icelandic |
| 20880 | IBM EBCDIC - Cyrillic (Russian) |
| 20905 | IBM EBCDIC - Turkish |
| 20924 | IBM EBCDIC - Latin-1/Open System (1047 + Euro symbol) |
| 20932 | JIS X 0208-1990 & 0121-1990 |
| 20936 | Simplified Chinese (GB2312) |
| 21025 | IBM EBCDIC - Cyrillic (Serbian, Bulgarian) |
| 21027 | Extended Alpha Lowercase |
| 21866 | Ukrainian (KOI8-U) |
| 28591 | ISO 8859-1 Latin I |
| 28592 | ISO 8859-2 Central Europe |
| 28593 | ISO 8859-3 Latin 3 |
| 28594 | ISO 8859-4 Baltic |
| 28595 | ISO 8859-5 Cyrillic |
| 28596 | ISO 8859-6 Arabic |
| 28597 | ISO 8859-7 Greek |
| 28598 | ISO 8859-8 Hebrew |
| 28599 | ISO 8859-9 Latin 5 |
| 28605 | ISO 8859-15 Latin 9 |
| 29001 | Europa 3 |
| 38598 | ISO 8859-8 Hebrew |
| 50220 | ISO 2022 Japanese with no halfwidth Katakana |
| 50221 | ISO 2022 Japanese with halfwidth Katakana |
| 50222 | ISO 2022 Japanese JIS X 0201-1989 |
| 50225 | ISO 2022 Korean |
| 50227 | ISO 2022 Simplified Chinese |
| 50229 | ISO 2022 Traditional Chinese |
| 50930 | Japanese (Katakana) Extended |
| 50931 | US/Canada and Japanese |
| 50933 | Korean Extended and Korean |
| 50935 | Simplified Chinese Extended and Simplified Chinese |
| 50936 | Simplified Chinese |
| 50937 | US/Canada and Traditional Chinese |
| 50939 | Japanese (Latin) Extended and Japanese |
| 51932 | EUC - Japanese |
| 51936 | EUC - Simplified Chinese |
| 51949 | EUC - Korean |
| 51950 | EUC - Traditional Chinese |
| 52936 | HZ-GB2312 Simplified Chinese |
| 54936 | Windows XP: GB18030 Simplified Chinese (4 Byte) |
| 57002 | ISCII Devanagari |
| 57003 | ISCII Bengali |
| 57004 | ISCII Tamil |
| 57005 | ISCII Telugu |
| 57006 | ISCII Assamese |
| 57007 | ISCII Oriya |
| 57008 | ISCII Kannada |
| 57009 | ISCII Malayalam |
| 57010 | ISCII Gujarati |
| 57011 | ISCII Punjabi |
| 65000 | Unicode UTF-7 |
| 65001 | Unicode UTF-8 |
| Identifier | Name |
| 1 | ASCII |
| 2 | NEXTSTEP |
| 3 | JapaneseEUC |
| 4 | UTF8 |
| 5 | ISOLatin1 |
| 6 | Symbol |
| 7 | NonLossyASCII |
| 8 | ShiftJIS |
| 9 | ISOLatin2 |
| 10 | Unicode |
| 11 | WindowsCP1251 |
| 12 | WindowsCP1252 |
| 13 | WindowsCP1253 |
| 14 | WindowsCP1254 |
| 15 | WindowsCP1250 |
| 21 | ISO2022JP |
| 30 | MacOSRoman |
| 10 | UTF16String |
| 0x90000100 | UTF16BigEndian |
| 0x94000100 | UTF16LittleEndian |
| 0x8c000100 | UTF32String |
| 0x98000100 | UTF32BigEndian |
| 0x9c000100 | UTF32LittleEndian |
| 65536 | Proprietary |
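For example, to select UTF-8 from the identifier tables above (a sketch; the setting is assumed to be named CodePage, consistent with error 434 below):
hdfs.Config("CodePage=65001"); // 65001 = Unicode UTF-8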
- Product: The product the license is for.
- Product Key: The key the license was generated from.
- License Source: Where the license was found (e.g., RuntimeLicense, License File).
- License Type: The type of license installed (e.g., Royalty Free, Single Server).
- Last Valid Build: The last valid build number for which the license will work.
This setting works only with the following structs: AS3Receiver, AS3Sender, Atom, Client(3DS), FTP, FTPServer, IMAP, OFTPClient, SSHClient, SCP, Server(3DS), Sexec, SFTP, SFTPServer, SSHServer, TCPClient, TCPServer.
Setting this configuration setting to true tells the struct to use the internal implementation instead of using the system security libraries.
On Windows, this setting is set to false by default. On Linux/macOS, this setting is set to true by default.
To use the system security libraries for Linux, OpenSSL support must be enabled. For more information on how to enable OpenSSL, please refer to the OpenSSL Notes section.
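A sketch of forcing the internal security implementation; the setting name UseInternalSecurityAPI is an assumption based on the description above:
hdfs.Config("UseInternalSecurityAPI=true"); // assumed setting name; bypasses the system security libraries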
Trappable Errors (HadoopDFS Struct)
Common Errors
| 600 | A server error occurred, and/or the struct was unable to process the server's response. Please refer to the error message for more information. |
| 601 | An unsupported operation or action was attempted. |
| 602 | The RawRequest or RawResponse configuration setting was queried without first setting the TransferredRequest configuration setting to ON. |
| 603 | The login credentials specified were invalid. Please refer to the error message for more information. |
| 604 | An invalid remote resource identifier (i.e., a name, path, Id, etc.) was specified. |
| 605 | An invalid index was specified. |
| 606 | An upload was aborted by the user before it could finish. |
| 607 | The specified resource is a folder and cannot be downloaded. |
| 608 | A download failed because the specified local_file already exists and overwrite is false. |
| 609 | The struct could not resume a download or upload. Please refer to the error message for more information. |
| 610 | An encrypted download could not be resumed because the DownloadTempFile configuration setting is not set. |
| 611 | An exception occurred while working with the specified local_file (or the current value of local_file is invalid). Please refer to the error message for more information. |
| 612 | An exception occurred while working with the specified upload or download stream. Please refer to the error message for more information. |
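These error codes typically surface as exceptions at the call site. A minimal sketch using the generic .NET exception type, since the specific exception class is not shown here:
try {
  hdfs.DownloadFile("/work_files/report.pdf"); // illustrative path
} catch (Exception ex) {
  // For example, error 608 indicates the local file already exists and overwrite is false.
  Console.WriteLine(ex.Message);
}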
The struct may also return one of the following error codes, which are inherited from other structs.
HTTP Errors
| 118 | Firewall error. The error description contains the detailed message. |
| 143 | Busy executing current method. |
| 151 | HTTP protocol error. The error message has the server response. |
| 152 | No server specified in url. |
| 153 | Specified url_scheme is invalid. |
| 155 | Range operation is not supported by server. |
| 156 | Invalid cookie index (out of range). |
| 301 | Interrupted. |
| 302 | Cannot open attached_file. |
The struct may also return one of the following error codes, which are inherited from other structs.
TCPClient Errors
| 100 | You cannot change the remote_port at this time. A connection is in progress. |
| 101 | You cannot change the remote_host (Server) at this time. A connection is in progress. |
| 102 | The remote_host address is invalid (0.0.0.0). |
| 104 | Already connected. If you want to reconnect, close the current connection first. |
| 106 | You cannot change the local_port at this time. A connection is in progress. |
| 107 | You cannot change the local_host at this time. A connection is in progress. |
| 112 | You cannot change MaxLineLength at this time. A connection is in progress. |
| 116 | remote_port cannot be zero. Please specify a valid service port number. |
| 117 | You cannot change the UseConnection option while the struct is active. |
| 135 | Operation would block. |
| 201 | Timeout. |
| 211 | Action impossible in control's present state. |
| 212 | Action impossible while not connected. |
| 213 | Action impossible while listening. |
| 301 | Timeout. |
| 302 | Could not open file. |
| 434 | Unable to convert string to selected CodePage. |
| 1105 | Already connecting. If you want to reconnect, close the current connection first. |
| 1117 | You need to connect first. |
| 1119 | You cannot change the LocalHost at this time. A connection is in progress. |
| 1120 | Connection dropped by remote host. |
SSL Errors
| 270 | Cannot load specified security library. |
| 271 | Cannot open certificate store. |
| 272 | Cannot find specified certificate. |
| 273 | Cannot acquire security credentials. |
| 274 | Cannot find certificate chain. |
| 275 | Cannot verify certificate chain. |
| 276 | Error during handshake. |
| 280 | Error verifying certificate. |
| 281 | Could not find client certificate. |
| 282 | Could not find server certificate. |
| 283 | Error encrypting data. |
| 284 | Error decrypting data. |
TCP/IP Errors
| 10004 | [10004] Interrupted system call. |
| 10009 | [10009] Bad file number. |
| 10013 | [10013] Access denied. |
| 10014 | [10014] Bad address. |
| 10022 | [10022] Invalid argument. |
| 10024 | [10024] Too many open files. |
| 10035 | [10035] Operation would block. |
| 10036 | [10036] Operation now in progress. |
| 10037 | [10037] Operation already in progress. |
| 10038 | [10038] Socket operation on nonsocket. |
| 10039 | [10039] Destination address required. |
| 10040 | [10040] Message is too long. |
| 10041 | [10041] Protocol wrong type for socket. |
| 10042 | [10042] Bad protocol option. |
| 10043 | [10043] Protocol is not supported. |
| 10044 | [10044] Socket type is not supported. |
| 10045 | [10045] Operation is not supported on socket. |
| 10046 | [10046] Protocol family is not supported. |
| 10047 | [10047] Address family is not supported by protocol family. |
| 10048 | [10048] Address already in use. |
| 10049 | [10049] Cannot assign requested address. |
| 10050 | [10050] Network is down. |
| 10051 | [10051] Network is unreachable. |
| 10052 | [10052] Net dropped connection or reset. |
| 10053 | [10053] Software caused connection abort. |
| 10054 | [10054] Connection reset by peer. |
| 10055 | [10055] No buffer space available. |
| 10056 | [10056] Socket is already connected. |
| 10057 | [10057] Socket is not connected. |
| 10058 | [10058] Cannot send after socket shutdown. |
| 10059 | [10059] Too many references, cannot splice. |
| 10060 | [10060] Connection timed out. |
| 10061 | [10061] Connection refused. |
| 10062 | [10062] Too many levels of symbolic links. |
| 10063 | [10063] File name is too long. |
| 10064 | [10064] Host is down. |
| 10065 | [10065] No route to host. |
| 10066 | [10066] Directory is not empty. |
| 10067 | [10067] Too many processes. |
| 10068 | [10068] Too many users. |
| 10069 | [10069] Disc quota exceeded. |
| 10070 | [10070] Stale NFS file handle. |
| 10071 | [10071] Too many levels of remote in path. |
| 10091 | [10091] Network subsystem is unavailable. |
| 10092 | [10092] WINSOCK DLL Version out of range. |
| 10093 | [10093] Winsock is not loaded yet. |
| 11001 | [11001] Host not found. |
| 11002 | [11002] Nonauthoritative 'Host not found' (try again or check DNS setup). |
| 11003 | [11003] Nonrecoverable errors: FORMERR, REFUSED, NOTIMP. |
| 11004 | [11004] Valid name, no data record (check DNS setup). |