Triple DES (3DES) is the common name for the Triple Data Encryption Algorithm (TDEA or Triple DEA) symmetric-key block cipher, which applies the Data Encryption Standard (DES) cipher algorithm three times to each data block. While in theory it offers 168 bits of security, in practice it provides only 112 bits. Worse, known attacks reduce its effective strength to roughly 80 bits. Do not use!
- IEEE 802.11
IEEE 802.11 is a set of Media Access Control (MAC) and physical layer (PHY) specifications for implementing wireless local area network (WLAN) computer communication in the 900 MHz and 2.4, 3.6, 5, and 60 GHz frequency bands. They are the world’s most widely used wireless computer networking standards, used in most home and office networks to allow laptops, printers, and smartphones to talk to each other and access the Internet without connecting wires. They are created and maintained by the Institute of Electrical and Electronics Engineers (IEEE) LAN/MAN Standards Committee (IEEE 802). The base version of the standard was released in 1997, and has had subsequent amendments. The standard and amendments provide the basis for wireless network products using the Wi-Fi brand. While each amendment is officially revoked when it is incorporated in the latest version of the standard, the corporate world tends to market to the revisions because they concisely denote capabilities of their products. As a result, in the marketplace, each revision tends to become its own standard.
See also 802.11a, 802.11b, 802.11g, 802.11n, 802.11ac, and 802.11ax.
- IEEE 802.11ac
IEEE 802.11ac is a wireless networking standard in the 802.11 family (which is marketed under the brand name Wi-Fi), developed in the IEEE Standards Association, providing high-throughput wireless local area networks (WLAN) on the 5 GHz band. The standard was developed from 2008 through 2013 and published in December 2013.
The specification has multi-station throughput of at least 1 Gbit/s and single-link throughput of at least 500 Mbit/s. This is accomplished by extending the air-interface concepts embraced by 802.11n: wider RF bandwidth (up to 160 MHz), more MIMO spatial streams (up to eight), downlink multi-user MIMO (up to four clients), and high-density modulation (up to 256-QAM).
The first 802.11ac products from 2013 are referred to as Wave 1, and the newer higher bandwidth products introduced in 2016 are referred to as Wave 2.
- IEEE 802.11ax
IEEE 802.11ax is a WLAN standard in the IEEE 802.11 family, designed to operate in the existing 2.4 GHz and 5 GHz spectrums. In addition to utilizing MIMO and MU-MIMO, the new amendment introduces OFDMA to improve overall spectral efficiency, and support for higher-order 1024-QAM modulation for increased throughput. Though the nominal data rate is just 37% higher than that of IEEE 802.11ac, the new amendment is expected to achieve a 4x increase in user throughput thanks to more efficient spectrum utilization.
IEEE 802.11ax is due to be publicly released sometime in 2019. Devices were presented at CES 2018 that showed a top speed of 11 Gbps.
- IEEE 802.11b
IEEE 802.11b-1999, or 802.11b, is an amendment to the IEEE 802.11 wireless networking specification that extends throughput up to 11 Mbit/s using the same 2.4 GHz band. A related amendment was incorporated into the IEEE 802.11-2007 standard.
- IEEE 802.11e
See also WMM.
IEEE 802.11e-2005 or 802.11e is an approved amendment to the IEEE 802.11 standard that defines a set of Quality of Service (QoS) enhancements for wireless LAN applications through modifications to the Media Access Control (MAC) layer. The standard is considered of critical importance for delay-sensitive applications, such as Voice over Wireless LAN and streaming multimedia. The amendment has been incorporated into the published IEEE 802.11-2007 standard.
- IEEE 802.11g
IEEE 802.11g-2003 or 802.11g is an amendment to the IEEE 802.11 specification that extended throughput to up to 54 Mbit/s using the same 2.4 GHz band as 802.11b. This specification under the marketing name of Wi-Fi has been implemented all over the world. The 802.11g protocol is now Clause 19 of the published IEEE 802.11-2007 standard, and Clause 19 of the published IEEE 802.11-2012 standard.
- IEEE 802.11n
IEEE 802.11n-2009, commonly shortened to 802.11n, is a wireless-networking standard that uses multiple antennas to increase data rates. Sometimes referred to as MIMO, which stands for “multiple input and multiple output”, it is an amendment to the IEEE 802.11-2007 wireless-networking standard. Its purpose is to improve network throughput over the two previous standards — 802.11a and 802.11g — with a significant increase in the maximum net data rate from 54 Mbit/s to 600 Mbit/s (slightly higher gross bit rate including for example error-correction codes, and slightly lower maximum throughput) with the use of four spatial streams at a channel width of 40 MHz. 802.11n standardized support for multiple-input multiple-output, frame aggregation, and security improvements, among other features. It can be used in the 2.4 GHz or 5 GHz frequency bands.
Development of 802.11n began in 2002, seven years before publication. The 802.11n protocol is now Clause 20 of the published IEEE 802.11-2012 standard.
- Management Frame Protection
IEEE 802.11w-2009 is an approved amendment to the IEEE 802.11 standard to increase the security of its management frames.
- Automated Certificate Management Environment
The Automatic Certificate Management Environment (ACME) protocol is a communications protocol for automating interactions between certificate authorities and their users’ web servers, allowing the automated deployment of public key infrastructure at very low cost. It was designed by the Internet Security Research Group (ISRG) for their Let’s Encrypt service.
The protocol, based on passing JSON-formatted messages over HTTPS, has been published as an Internet Standard in RFC 8555 by its own chartered IETF working group.
- Advanced Encryption Standard
The Advanced Encryption Standard (AES) is a symmetric-key algorithm for the encryption of electronic data, established by the U.S. National Institute of Standards and Technology (NIST) in 2001. AES has been adopted by the U.S. government for top secret information and is used worldwide today. It supersedes the Data Encryption Standard (DES).
- Advanced Encryption Standard Instruction Set
Advanced Encryption Standard Instruction Set (or AES-NI) is an extension of the x86 CPU architecture from Intel and AMD. It accelerates data encryption and decryption when an application uses the Advanced Encryption Standard (AES).
- AMD Platform Security Processor
- AMD PSP
- AMD Secure Technology
The AMD Platform Security Processor (PSP), officially known as AMD Secure Technology, is a trusted execution environment subsystem incorporated since about 2013 into all AMD microprocessors. According to an AMD developer’s guide, the subsystem is “responsible for creating, monitoring and maintaining the security environment” and “its functions include managing the boot process, initializing various security related mechanisms, and monitoring the system for any suspicious activity or events and implementing an appropriate response.” Critics worry it can be used as a backdoor and is a security concern.
AMD has denied requests to open source the code that runs on the PSP.
The PSP is similar to the Intel Management Engine for Intel processors.
- Authenticated Received Chain
Authenticated Received Chain (ARC) is an email authentication system designed to allow an intermediate mail server, such as a mailing list or forwarding service, to sign an email's original authentication results. This allows a receiving service to validate an email even when the email's SPF and DKIM checks are rendered invalid by an intermediate server's processing.
ARC is currently an Internet Draft with the IETF.
DMARC allows a sender’s domain to indicate that their emails are protected by SPF and/or DKIM, and tells a receiving service what to do if neither of those authentication methods passes - such as to reject the message. However, a strict DMARC policy may block legitimate emails sent through a mailing list or forwarder, as the SPF check will fail due to the unapproved sender, and the DKIM signature will be invalidated if the message is modified, such as by adding a subject tag or footer.
ARC helps solve this problem by giving intermediate servers a way to sign the original message’s validation results. Even if the SPF and DKIM validation fail, the receiving service can choose to validate the ARC. If the ARC indicates that the original message passed the SPF and DKIM checks, and the only modifications were made by intermediaries trusted by the receiving service, the receiving service may choose to accept the email.
- DNS zone transfer
DNS zone transfer, also known by the DNS query type that induces it, AXFR, is a type of DNS transaction. A zone transfer uses TCP for transport and takes the form of a client–server transaction. The client requesting a zone transfer may be a slave (secondary) server requesting data from a master (primary) server. The portion of the database that is replicated is a zone. Avoid zone transfers if possible and use other, more secure replication methods. See also What are zone transfers? from Daniel Bernstein.
- Bayesian Filter
- Bayesian Filtering
- Bayesian Spam Filter
A Bayesian spam filter (after Rev. Thomas Bayes) is a statistical technique of e-mail filtering. In its basic form, it makes use of a naive Bayes classifier on bag of words features to identify spam e-mail, an approach commonly used in text classification.
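As a toy illustration of the idea, a naive Bayes classifier over bag-of-words features can be sketched in a few lines of Python. The corpus, word counts, and smoothing below are illustrative assumptions, not any particular filter's implementation:

```python
import math
from collections import Counter

# Tiny illustrative training corpus (hypothetical messages)
spam_msgs = ["win money now", "free money offer"]
ham_msgs = ["meeting at noon", "lunch at noon tomorrow"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts = word_counts(spam_msgs)
ham_counts = word_counts(ham_msgs)
vocabulary = set(spam_counts) | set(ham_counts)

def classify(message):
    # Start from the log prior probability of each class
    log_spam = math.log(len(spam_msgs) / (len(spam_msgs) + len(ham_msgs)))
    log_ham = math.log(len(ham_msgs) / (len(spam_msgs) + len(ham_msgs)))
    for word in message.split():
        # Laplace (add-one) smoothing avoids zero probabilities for unseen words
        log_spam += math.log((spam_counts[word] + 1) /
                             (sum(spam_counts.values()) + len(vocabulary)))
        log_ham += math.log((ham_counts[word] + 1) /
                            (sum(ham_counts.values()) + len(vocabulary)))
    return "spam" if log_spam > log_ham else "ham"

print(classify("free money"))       # classified as spam
print(classify("meeting at noon"))  # classified as ham
```

Production filters refine this scheme with per-user training, token weighting, and much larger corpora, but the core probability model is the same.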
- Beacon Broadcast interval
- Beacon Interval
The beacon broadcast interval is the time lag between the beacons sent by your router or access point. The lower the value, the smaller the time lag, meaning the beacon is broadcast more frequently; the higher the value, the less frequently it is broadcast.
The beacon is needed for your devices or clients to receive information about the particular router. It carries key information such as the SSID, a timestamp, and various capability parameters.
Blowfish is a symmetric-key block cipher, designed in 1993 by Bruce Schneier and included in a large number of cipher suites and encryption products. Blowfish provides a good encryption rate in software and no effective cryptanalysis of it has been found to date. However, the Advanced Encryption Standard (AES) now receives more attention. Blowfish users are encouraged by Bruce Schneier, Blowfish’s creator, to use the more modern and computationally efficient alternative Twofish.
- Basic Service Set Identifier
An infrastructure mode wireless network consists of one or more redistribution points — typically access points — together with one or more "client" stations that are associated with (i.e. connected to) that redistribution point.
Each access point has its own unique identifier, a BSSID, which is a unique 48-bit identifier that follows MAC Address conventions and is usually non-configurable.
- Certificate Authority
- CCM mode Protocol
- Counter Mode CBC-MAC Protocol
- Counter Mode Cipher Block Chaining Message Authentication Code Protocol
CCMP is an encryption protocol designed for Wireless LAN products that implements the standards of the IEEE 802.11i amendment to the original IEEE 802.11 standard. CCMP is an enhanced data cryptographic encapsulation mechanism designed for data confidentiality and based upon the Counter Mode with CBC-MAC (CCM mode) of the Advanced Encryption Standard (AES) standard. It was created to address the vulnerabilities presented by Wired Equivalent Privacy (WEP), a dated, insecure protocol.
CCMP is the standard encryption protocol for use with the Wi-Fi Protected Access II (WPA2) standard and is much more secure than the Wired Equivalent Privacy (WEP) protocol and Temporal Key Integrity Protocol (TKIP) of Wi-Fi Protected Access (WPA).
- Chip card
- Integrated Circuit Card
- Smart card
A pocket-sized plastic card with embedded integrated circuits. Smart cards can provide identification, authentication, data storage and application processing. See the Wikipedia article for many possible usage scenarios.
- Cipher Suite
A cipher suite is a standardized collection of key exchange algorithms, encryption algorithms (ciphers), and message authentication code (MAC) algorithms that together provide an authenticated encryption scheme. For more information see [KAea14b].
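The cipher suites a TLS implementation is willing to negotiate can be inspected directly. For example, Python's `ssl` module exposes the suites enabled in a default context (the exact list depends on the local OpenSSL build):

```python
import ssl

# List the cipher suites enabled in a default TLS client context
context = ssl.create_default_context()
for suite in context.get_ciphers()[:5]:
    # each entry names its key exchange, cipher, and MAC/AEAD combination
    print(suite["name"], "-", suite["protocol"])
```

Suite names such as `ECDHE-RSA-AES256-GCM-SHA384` encode each component: the key exchange (ECDHE), authentication (RSA), cipher (AES-256-GCM), and hash (SHA-384).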
Composer is a tool for dependency management in PHP. It allows a developer to declare the libraries a project depends on, and it installs them alongside the project.
- Cryptographic Hash Function
A cryptographic hash function is a Hash Function which is considered practically impossible to invert, that is, to recreate the input data from its hash value alone. Cryptographic hash functions are used for digital signatures, Message Authentication Codes (MAC), and other forms of authentication. They can also be used as ordinary hash functions: to index data in hash tables, for fingerprinting, to detect duplicate data or uniquely identify files, and as checksums to detect accidental data corruption. Cryptographic hash values are sometimes called (digital) fingerprints, checksums, or just hash values. Widely used examples include MD5, SHA-1, and SHA-256, although MD5 and SHA-1 are no longer considered secure.
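The two defining behaviors — determinism and sensitivity to the smallest input change — are easy to observe with Python's `hashlib`:

```python
import hashlib

digest1 = hashlib.sha256(b"hello world").hexdigest()
digest2 = hashlib.sha256(b"hello world").hexdigest()
digest3 = hashlib.sha256(b"hello worle").hexdigest()

print(len(digest1))        # 64 hex characters = 256 bits
print(digest1 == digest2)  # True: hashing is deterministic
print(digest1 == digest3)  # False: a one-byte change alters the whole digest
```

Inverting the function, i.e. recovering `b"hello world"` given only `digest1`, is what a cryptographic hash function is designed to make infeasible.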
In cryptography, Curve25519 is an elliptic curve offering 128 bits of security and designed for use with the Elliptic Curve Diffie–Hellman (ECDH) key agreement scheme. It is one of the fastest ECC curves and is not covered by any known patents. Curve25519 was first released by Daniel J. Bernstein in 2005, but interest increased considerably after 2013 when it was discovered that the NSA had implemented a backdoor into Dual EC DRBG. While not directly related, suspicious aspects of the NIST P curves led to concerns that the NSA had chosen values that gave them an advantage in recovering private keys.
Daemons are long-running programs, usually running in the background, that provide services for other programs and/or clients on other systems connected by a network. Daemons are typically started automatically at system boot and run on their own, without any user interaction.
- DNS-based Authentication of Named Entities
DNS-based Authentication of Named Entities (DANE) is a protocol to allow X.509 certificates, commonly used for Transport Layer Security (TLS), to be bound to DNS names using Domain Name System Security Extensions (DNSSEC). It is proposed in RFC 6698 as a way to authenticate TLS client and server entities without a Certificate Authority (CA).
- Data deduplication
In computing, data deduplication is a technique for eliminating duplicate copies of repeating data, thereby dramatically reducing the required storage space. It can also be applied to network data transfers to reduce the number of bytes that must be transferred.
The deduplication process cuts the data to be stored into equal-sized 'chunks'. These chunks are compared to chunks stored earlier. Whenever a match occurs, the new chunk is replaced with a small reference pointing to the already stored chunk instead of being stored again. Given that the same byte pattern may occur dozens, hundreds, or even thousands of times (depending on the chunk size used), the amount of data that must be stored or transferred can be greatly reduced.
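A minimal sketch of this process in Python, using fixed-size chunks and SHA-256 digests as chunk references (real systems often use larger, content-defined chunks):

```python
import hashlib

def dedup(data, chunk_size=4):
    """Split data into fixed-size chunks, storing each unique chunk once."""
    store = {}  # digest -> chunk bytes (each unique chunk stored once)
    refs = []   # the original data becomes a sequence of references
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)
        refs.append(digest)
    return store, refs

data = b"abcdabcdabcdXYZ!"
store, refs = dedup(data)
print(len(refs), "chunks,", len(store), "unique")  # 4 chunks, 2 unique

# Reconstruction follows the references back into the store
assert b"".join(store[d] for d in refs) == data
```

Here three of the four chunks are identical, so only two chunks are actually stored; the rest are references.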
- Data Encryption Standard
The Data Encryption Standard (DES) is a previously predominant symmetric-key algorithm for the encryption of electronic data. It is now considered to be insecure. This is chiefly due to the 56-bit key size being too small; in January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes. The cipher has been superseded by the Advanced Encryption Standard (AES) and has been withdrawn as a standard. DES was developed in the early 1970s at IBM. Do not use!
- Diffie-Hellman Key Exchange
Diffie–Hellman key exchange (DH) is a specific method of exchanging cryptographic keys. The method allows two parties that have no prior knowledge of each other to jointly establish a shared secret key over an insecure communications channel. This key can then be used to encrypt subsequent communications using a symmetric key cipher. YouTube has a great video that explains it in 5 minutes.
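The arithmetic can be sketched with deliberately tiny numbers (real deployments use primes of 2048 bits or more, and the private exponents are chosen randomly):

```python
# Public parameters: a prime modulus p and a generator g (toy-sized here)
p, g = 23, 5

# Each party picks a private exponent and publishes g^x mod p
alice_private, bob_private = 6, 15
alice_public = pow(g, alice_private, p)
bob_public = pow(g, bob_private, p)

# Both arrive at the same shared secret g^(a*b) mod p
alice_shared = pow(bob_public, alice_private, p)
bob_shared = pow(alice_public, bob_private, p)
assert alice_shared == bob_shared
print("shared secret:", alice_shared)
```

An eavesdropper sees `p`, `g`, and both public values, but recovering a private exponent from them is the discrete logarithm problem, which is believed to be hard for large parameters.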
- DH Parameters
DH parameters are pre-generated large prime numbers that accelerate the generation of session keys when using the Diffie-Hellman Key Exchange. Finding and evaluating such prime numbers takes a long time (up to several minutes). Using pre-generated values allows session keys to be established during the initial handshake and at periodic renewals without any noticeable delay.
Diceware is a method for creating passphrases, passwords, and other cryptographic variables using ordinary dice as a hardware random number generator. For each word in the passphrase, five rolls of the dice are required. The numbers from 1 to 6 that come up in the rolls are assembled as a five-digit number, e.g. 43146. That number is then used to look up a word in a word list. In the English list 43146 corresponds to munch. By generating several words in sequence, a lengthy passphrase can be constructed.
A Diceware word list is any list of 6^5 = 7,776 unique words, preferably ones the user will find easy to spell and to remember. The contents of the word list do not have to be protected or concealed in any way, as the security of a Diceware passphrase lies in the number of words selected and the size of the list each word is drawn from. Lists have been compiled for several languages.
See also the original Diceware Passphrase Home Page or the urown.net Diceware installation.
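The lookup mechanics can be sketched in Python. The word list below is a placeholder built from numbered dummy words; a real list (such as the EFF's) maps each of the 7,776 five-digit keys to a memorable word:

```python
import secrets

# Build a toy word list: every 5-digit dice key maps to a placeholder word
keys = [a + b + c + d + e
        for a in "123456" for b in "123456" for c in "123456"
        for d in "123456" for e in "123456"]
wordlist = {key: f"word{i}" for i, key in enumerate(keys)}
assert len(wordlist) == 6 ** 5  # 7776 entries

def roll_key():
    # Five dice rolls assembled into a key like "43146"
    return "".join(str(secrets.randbelow(6) + 1) for _ in range(5))

# A six-word passphrase: 7776^6 possible combinations
passphrase = " ".join(wordlist[roll_key()] for _ in range(6))
print(passphrase)
```

`secrets` stands in for the physical dice; with real Diceware the rolls come from actual dice precisely so that no software random number generator has to be trusted.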
- Digital Fingerprint
- Digital Signature Standard
The Digital Signature Standard (DSS) is a United States Federal Information Processing Standard specifying a suite of algorithms that can be used to generate digital signatures established by the National Institute of Standards and Technology (NIST) in 1994. Four revisions to the initial specification have been released: FIPS 186-1 in 1996, FIPS 186-2 in 2000, FIPS 186-3 in 2009, and FIPS 186-4 in 2013.
It defines the Digital Signature Algorithm (DSA). It also contains a definition of RSA signatures, based on the definitions in PKCS #1 version 2.1 and in American National Standard X9.31 with some additional requirements, and a definition of the Elliptic Curve Digital Signature Algorithm, based on American National Standard X9.62 with some additional requirements and some recommended elliptic curves. It approves the use of all three algorithms.
- Distance Optimization
A configuration option in wireless networks, specifying the distance, in meters, between the wireless access point and the furthest wireless client.
- DomainKeys Identified Mail
DomainKeys Identified Mail (DKIM) is an email authentication method designed to detect forged sender addresses in emails (email spoofing), a technique often used in phishing and email spam.
DKIM allows the receiver to check that an email claimed to have come from a specific domain was indeed authorized by the owner of that domain. It achieves this by affixing a digital signature, linked to a domain name, to each outgoing email message. The recipient system can verify this by looking up the sender’s public key published in the DNS. A valid signature also guarantees that some parts of the email (possibly including attachments) have not been modified since the signature was affixed. Usually, DKIM signatures are not visible to end-users, and are affixed or verified by the infrastructure rather than the message’s authors and recipients.
DKIM is now an “Internet standard”. It is defined in RFC 6376, dated September 2011; with updates in RFC 8301 and RFC 8463.
- Domain-based Message Authentication, Reporting and Conformance
DMARC (Domain-based Message Authentication, Reporting and Conformance) is an email authentication protocol. It is designed to give email domain owners the ability to protect their domain from unauthorized use, commonly known as email spoofing. The purpose and primary outcome of implementing DMARC is to protect a domain from being used in business email compromise attacks, phishing emails, email scams and other cyber threat activities.
Once the DMARC DNS entry is published, any receiving email server can authenticate the incoming email based on the instructions published by the domain owner within the DNS entry. If the email passes the authentication it will be delivered and can be trusted. If the email fails the check, depending on the instructions held within the DMARC record the email could be delivered, quarantined or rejected.
DMARC extends two existing mechanisms, Sender Policy Framework (SPF) and DomainKeys Identified Mail (DKIM). It allows the administrative owner of a domain to publish a policy in their DNS records to specify which mechanism (DKIM, SPF or both) is employed when sending email from that domain; how to check the From: field presented to end users; how the receiver should deal with failures - and a reporting mechanism for actions performed under those policies.
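Such a policy is published as a TXT record at the `_dmarc` subdomain. A sketch of what such a record can look like, with a hypothetical domain and report address:

```
_dmarc.example.com.  IN  TXT  "v=DMARC1; p=quarantine; rua=mailto:dmarc-reports@example.com"
```

Here `p=quarantine` asks receivers to treat failing mail as suspicious rather than rejecting it outright, and `rua=` names the address to which aggregate reports are sent.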
DMARC is defined in RFC 7489, dated March 2015, as “Informational”.
- Domain Name System
- DNS Resolver
The client side of the DNS is called a DNS resolver. A resolver is responsible for initiating and sequencing the queries that ultimately lead to a full resolution (translation) of the resource sought, e.g., translation of a domain name into an IP address. DNS resolvers are classified by a variety of query methods, such as recursive, non-recursive, and iterative. A resolution process may use a combination of these methods.
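A stub resolver is what programs use when they ask the operating system to translate a name. In Python, that request is a single call into the `socket` module:

```python
import socket

# Ask the system's stub resolver to translate a hostname into addresses
infos = socket.getaddrinfo("localhost", None)
addresses = sorted({info[4][0] for info in infos})
print(addresses)  # typically includes 127.0.0.1 and/or ::1
```

The recursion, caching, and iteration described above happen behind this call, in the operating system and in the recursive resolvers it is configured to use.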
DNS over HTTPS (DoH) is a protocol for performing remote Domain Name System (DNS) resolution via the HTTPS protocol. A goal of the method is to increase user privacy and security by preventing eavesdropping and manipulation of DNS data by man-in-the-middle attacks by using the HTTPS protocol to encrypt the data between the DoH client and the DoH-based DNS resolver. By March 2018, Google and the Mozilla Foundation had started testing versions of DNS over HTTPS. In February 2020, Firefox switched to DNS over HTTPS by default for users in the United States.
DNS over TLS (DoT) is a security protocol for encrypting and wrapping Domain Name System (DNS) queries and answers via the Transport Layer Security (TLS) protocol. The goal of the method is to increase user privacy and security by preventing eavesdropping and manipulation of DNS data via man-in-the-middle attacks.
DNSCrypt is a network protocol that authenticates and encrypts Domain Name System (DNS) traffic between the user's computer and recursive name servers. It was originally designed by Frank Denis and Yecheng Fu. Although multiple client and server implementations exist, the protocol was never proposed to the Internet Engineering Task Force (IETF) by way of a Request for Comments (RFC). DNSCrypt wraps unmodified DNS traffic between a client and a DNS resolver in a cryptographic construction in order to detect forgery. Though it doesn't provide end-to-end security, it protects the local network against man-in-the-middle attacks. It also mitigates UDP-based amplification attacks by requiring a question to be at least as large as the corresponding response. Thus, DNSCrypt helps to prevent DNS amplification attacks.
- Domain Name System Security Extensions
The Domain Name System Security Extensions (DNSSEC) is a suite of Internet Engineering Task Force (IETF) specifications for securing certain kinds of information provided by the Domain Name System (DNS) as used on Internet Protocol (IP) networks. It is a set of extensions to DNS which provide to DNS clients (resolvers) origin authentication of DNS data, authenticated denial of existence, and data integrity, but not availability or confidentiality.
- Digital Signature Algorithm
The Digital Signature Algorithm (DSA) is a United States Federal Information Processing Standard for digital signatures. In August 1991 the National Institute of Standards and Technology (NIST) proposed DSA for use in their Digital Signature Standard (DSS) and adopted it in 1994 in its FIPS standards specification. Four revisions to the initial specification have been released: in 1996, 2000, 2009, and 2013.
DSA is covered by a U.S. Patent and attributed to a former NSA employee. The patent was given to the United States, and NIST has made it available worldwide royalty-free. DSA is a variant of the ElGamal signature scheme.
- DiskStation Manager
Synology’s primary product is the Synology DiskStation Manager (DSM), a Linux based software package that is the operating system for the DiskStation and RackStation products.
- DTIM Interval
- Delivery traffic indication map
- Delivery traffic indication message
DTIM stands for Delivery traffic indication map or message. It is basically an additional message added after the normal beacon broadcast by your router or access point. See Beacon Interval.
Depending on the timing set for your router, the router "buffers" broadcast and multicast data and lets your mobile devices or clients know when to "wake up" to receive that data.
The more often DTIM is transmitted, the more often your mobile devices wake up, and the more battery they use (due to the lack of "sleep"). By setting low values for the DTIM and beacon intervals, you can effectively keep your devices awake indefinitely, so they never go into sleep mode when idle. In some cases this "no sleep" setup can cause 10-20% additional power consumption.
- Dual EC DRBG
- Dual Elliptic Curve Deterministic Random Bit Generator
Dual EC DRBG (Dual Elliptic Curve Deterministic Random Bit Generator) is an algorithm that was presented as a cryptographically secure pseudorandom number generator (CSPRNG) using methods in Elliptic Curve Cryptography. Despite wide public criticism, including a potential backdoor, for seven years it was one of the four (now three) CSPRNGs standardized in NIST SP 800-90A as originally published circa June 2006, until withdrawn in 2014.
- Elliptic Curve Cryptography
- Elliptic-Curve Cryptography
Elliptic Curve Cryptography (ECC) is an approach to public-key cryptography based on the algebraic structure of elliptic curves over finite fields. ECC requires smaller keys compared to non-ECC cryptography (based on plain Galois fields) to provide equivalent security.
- Elliptic Curve Diffie–Hellman
- Elliptic-Curve Diffie–Hellman
Elliptic Curve Diffie–Hellman (ECDH) is an anonymous key agreement protocol that allows two parties, each having an Elliptic Curve public–private key pair, to establish a shared secret over an insecure channel. This shared secret may be directly used as a key, or better yet, to derive another key which can then be used to encrypt subsequent communications using a symmetric key cipher. It is a variant of the Diffie-Hellman Key Exchange using Elliptic Curve Cryptography.
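The key agreement can be illustrated with a deliberately tiny textbook curve, y² = x³ + 2x + 2 over GF(17) with base point G = (5, 1). Real deployments use much larger curves such as Curve25519 or the NIST P curves; this is only a sketch of the mechanics:

```python
# Toy curve y^2 = x^3 + 2x + 2 mod 17 with base point G = (5, 1)
A, P = 2, 17
G = (5, 1)

def ec_add(p1, p2):
    """Add two points on the curve; None is the point at infinity."""
    if p1 is None:
        return p2
    if p2 is None:
        return p1
    if p1[0] == p2[0] and (p1[1] + p2[1]) % P == 0:
        return None  # p1 and p2 are inverses of each other
    if p1 == p2:
        slope = (3 * p1[0] ** 2 + A) * pow(2 * p1[1], -1, P) % P
    else:
        slope = (p2[1] - p1[1]) * pow(p2[0] - p1[0], -1, P) % P
    x = (slope ** 2 - p1[0] - p2[0]) % P
    return (x, (slope * (p1[0] - x) - p1[1]) % P)

def ec_mul(k, point):
    """Scalar multiplication via double-and-add."""
    result = None
    while k:
        if k & 1:
            result = ec_add(result, point)
        point = ec_add(point, point)
        k >>= 1
    return result

# Each party keeps a scalar private and publishes its multiple of G
alice_private, bob_private = 3, 7
alice_public = ec_mul(alice_private, G)
bob_public = ec_mul(bob_private, G)

# Both compute the same shared point (a*b)G
assert ec_mul(alice_private, bob_public) == ec_mul(bob_private, alice_public)
```

As with classic DH, an eavesdropper sees only the public points; recovering a private scalar from them is the elliptic-curve discrete logarithm problem.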
- Elliptic Curve Digital Signature Algorithm
In cryptography, the Elliptic Curve Digital Signature Algorithm (ECDSA) offers a variant of the Digital Signature Algorithm (DSA) which uses Elliptic Curve Cryptography.
In public-key cryptography, Edwards-curve Digital Signature Algorithm (EdDSA) is a digital signature scheme using a variant of Schnorr signature based on Twisted Edwards curves. It is designed to be faster than existing digital signature schemes without sacrificing security. It was developed by a team including Daniel J. Bernstein, Niels Duif, Tanja Lange, Peter Schwabe, and Bo-Yin Yang. The reference implementation is public domain software.
- Electrically Erasable Programmable Read-Only Memory
EEPROM (also E2PROM) stands for electrically erasable programmable read-only memory and is a type of non-volatile memory used in computers, integrated in microcontrollers for smart cards and remote keyless systems, and other electronic devices to store relatively small amounts of data but allowing individual bytes to be erased and reprogrammed.
- Electronic Frontier Foundation
The Electronic Frontier Foundation (EFF) is an international non-profit digital rights group based in San Francisco, California. The foundation was formed in July 1990 by John Gilmore, John Perry Barlow and Mitch Kapor to promote Internet civil liberties.
- Erasable Programmable Read-only Memory
An EPROM (rarely EROM), or erasable programmable Read-Only Memory, is a type of programmable Read-Only Memory (PROM) chip that retains its data when its power supply is switched off. Computer memory that can retrieve stored data after a power supply has been turned off and back on is called non-volatile. It is an array of floating-gate transistors individually programmed by an electronic device that supplies higher voltages than those normally used in digital circuits. Once programmed, an EPROM can be erased by exposing it to a strong ultraviolet light source (such as a mercury-vapor lamp). EPROMs are easily recognizable by the transparent fused quartz window in the top of the package, through which the silicon chip is visible, and which permits exposure to ultraviolet light during erasing.
Extended SMTP (ESMTP) comprises extensions to SMTP that were defined in RFC 5321 in 2008. It is in widespread use today. Like SMTP, ESMTP uses TCP port 25.
- Filter Bubble
A filter bubble is a result of a personalized search in which a website algorithm selectively guesses what information a user would like to see based on information about the user (such as location, past click behavior and search history) and, as a result, users become separated from information that disagrees with their viewpoints, effectively isolating them in their own cultural or ideological bubbles. The term was coined by internet activist Eli Pariser in his book by the same name [ARNea]. The bubble effect may have negative implications for civic discourse, according to Pariser, but there are contrasting views suggesting the effect is minimal and addressable.
Federal Information Processing Standards (FIPS) are publicly announced standards developed by the US Government through its National Institute of Standards and Technology (NIST) for use in computer systems by non-military government agencies and government contractors.
FIPS standards are issued to establish requirements for various purposes such as ensuring computer security and interoperability, and are intended for cases in which suitable industry standards do not already exist. Many FIPS specifications are modified versions of standards used in the technical communities, such as the American National Standards Institute (ANSI), the Institute of Electrical and Electronics Engineers (IEEE), and the International Organization for Standardization (ISO).
These include amongst others, encryption standards, such as the Digital Signature Algorithm (DSA), Data Encryption Standard (DES) and the Advanced Encryption Standard (AES).
Firmware is essentially software that is very closely tied to specific hardware and unlikely to need frequent updates. It is typically stored in non-volatile memory chips such as ROM, EPROM, or flash memory. Since it can only be updated or replaced through special procedures designed by the hardware manufacturer, it sits somewhere on the boundary between hardware and software; thus the name "firmware".
- Forward Secrecy
- Perfect Forward Secrecy
In cryptography, forward secrecy is a property of key-agreement protocols ensuring that a session key derived from a set of long-term keys cannot be compromised if one of the long-term keys (such as the server's private key) is compromised in the future. Usually either Diffie-Hellman Key Exchange or Elliptic Curve Diffie–Hellman is used to create and exchange session keys.
- Fragmentation Threshold
In wireless networks this value sets the maximum size of packet a client can send. Smaller packets improve reliability but decrease performance. Unless you are facing problems with an unreliable network, reducing the fragmentation threshold is not recommended. Make sure it is set to the default setting (usually 2346).
- File Transfer Protocol
- Hash Function
- Hash Functions
A hash function is any function that can be used to map data of arbitrary size onto data of a fixed size. The values returned by a hash function are called hash values, hash codes, digests, or simply hashes. Hash functions are often used in combination with a hash table, a common data structure used in computer software for rapid data lookup. Hash functions accelerate table or database lookup by detecting duplicated records in a large file. One such application is finding similar stretches in DNA sequences. They are also useful in cryptography. A Cryptographic Hash Function allows one to easily verify whether some input data map onto a given hash value, but if the input data is unknown it is deliberately difficult to reconstruct it (or any equivalent alternatives) by knowing the stored hash value. This is used for assuring integrity of transmitted data, and is the building block for HMAC’s, which provide message authentication.
Hash functions are related to (and often confused with) checksums, check digits, fingerprints, lossy compression, randomization functions, error-correcting codes, and ciphers. Although the concepts overlap to some extent, each one has its own uses and requirements and is designed and optimized differently.
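A short illustration with Python’s standard `hashlib` module: whatever the input size, the digest length is fixed, and even a one-character change produces a completely different hash (the input strings here are arbitrary examples):

```python
import hashlib

h1 = hashlib.sha256(b"The quick brown fox").hexdigest()
h2 = hashlib.sha256(b"The quick brown fix").hexdigest()  # one character changed

print(len(h1))   # 64 hex characters = 256 bits, regardless of input size
print(h1 == h2)  # False: a small input change scrambles the whole digest
```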
In cryptography, an HMAC (sometimes expanded as either keyed-hash message authentication code or hash-based message authentication code) is a specific type of Message Authentication Code (MAC) involving a Cryptographic Hash Function and a secret cryptographic key. It may be used to simultaneously verify both the data integrity and the authentication of a message, as with any MAC. Any cryptographic hash function, such as SHA-256 or SHA-3, may be used in the calculation of an HMAC; the resulting MAC algorithm is termed HMAC-X, where X is the hash function used (e.g. HMAC-SHA256 or HMAC-SHA3). The cryptographic strength of the HMAC depends upon the cryptographic strength of the underlying hash function, the size of its hash output, and the size and quality of the key.
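Computing an HMAC-SHA256 with Python’s standard `hmac` module might look like this (the key and message are made-up examples); the receiver recomputes the tag with the shared key and compares it in constant time:

```python
import hashlib
import hmac

key = b"shared-secret-key"        # example secret, known to both parties
message = b"amount=100&to=alice"  # example message to authenticate

# Sender computes the tag and transmits it alongside the message.
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

# Receiver recomputes and compares in constant time to resist timing attacks.
expected = hmac.new(key, message, hashlib.sha256).hexdigest()
print(hmac.compare_digest(tag, expected))  # True: message is intact and authentic
```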
- HTTP Public Key Pinning
HTTP Public Key Pinning (HPKP) is a security mechanism introduced in 2015 with RFC 7469. Delivered via an HTTP header, it allows HTTPS websites to resist impersonation by attackers using mis-issued or otherwise fraudulent certificates. In order to do so, it delivers a set of public keys to the client (browser), which should be the only ones trusted for connections to this domain. In practice it was never widely adopted: for website owners it is difficult and risky to maintain. Therefore Google announced in October 2017 that it would deprecate and later remove the HPKP feature from the Chrome browser.
- HTTP Strict Transport Security
- Hypertext Transfer Protocol
The Hypertext Transfer Protocol (HTTP) is an application layer protocol for distributed, collaborative, hypermedia information systems. HTTP is the foundation of data communication for the World Wide Web, where hypertext documents include hyperlinks to other resources that the user can easily access, for example by a mouse click or by tapping the screen in a web browser.
Development of HTTP was initiated by Tim Berners-Lee at CERN in 1989. Development of early HTTP Requests for Comments (RFCs) was a coordinated effort by the Internet Engineering Task Force (IETF) and the World Wide Web Consortium (W3C), with work later moving to the IETF.
- Hypertext Transfer Protocol Secure
Hypertext Transfer Protocol Secure (HTTPS) is an extension of the Hypertext Transfer Protocol (HTTP). It is used for secure communication over a computer network, and is widely used on the Internet. In HTTPS, the communication protocol is encrypted using Transport Layer Security (TLS) or, formerly, Secure Sockets Layer (SSL). The protocol is therefore also referred to as HTTP over TLS, or HTTP over SSL.
- Internet Assigned Numbers Authority
The Internet Assigned Numbers Authority (IANA) is a function of ICANN, a nonprofit private American corporation that oversees global IP address allocation, autonomous system number allocation, root zone management in the Domain Name System (DNS), media types, and other Internet Protocol-related symbols and Internet numbers. Its website is www.iana.org.
- Internet Corporation for Assigned Names and Numbers
The Internet Corporation for Assigned Names and Numbers (ICANN) is a nonprofit organization responsible for coordinating the maintenance and procedures of several databases related to the namespaces and numerical spaces of the Internet, ensuring the network’s stable and secure operation.
Much of its work has concerned the Internet’s global Domain Name System (DNS), including policy development for internationalization of the DNS system, introduction of new generic top-level domains (TLDs), and the operation of root name servers. Its website is www.icann.org.
- Institute of Electrical and Electronics Engineers
The Institute of Electrical and Electronics Engineers (IEEE) is a professional association with its corporate office in New York City and its operations center in Piscataway, New Jersey. It was formed in 1963 from the amalgamation of the American Institute of Electrical Engineers and the Institute of Radio Engineers. As of 2018, it is the world’s largest association of technical professionals with more than 423,000 members in over 160 countries around the world. Its objectives are the educational and technical advancement of electrical and electronic engineering, telecommunications, computer engineering and allied disciplines.
- Internet Engineering Task Force
The IETF is a large open international community of network designers, operators, vendors, and researchers concerned with the evolution of the Internet architecture and the smooth operation of the Internet. The technical work of the IETF is done in Working Groups, which are organized by topic into several Areas.
These working groups develop and promote the voluntary Internet standards, in particular the standards that comprise the Internet protocol suite (TCP/IP). These are typically published as RFC. It is an open standards organization, with no formal membership or membership requirements. All participants and managers are volunteers, though their work is usually funded by their employers or sponsors.
The IETF started out as an activity supported by the U.S. federal government, but since 1993 it has operated as a standards development function under the auspices of the Internet Society, an international membership-based non-profit organization.
Internet Message Access Protocol (IMAP) is a protocol for email retrieval and storage by the MUA from the MAS. It was developed as an alternative to POP. Unlike POP, IMAP specifically allows multiple clients to be connected to the same mailbox simultaneously, and through flags stored on the server, different clients accessing the same mailbox at the same or different times can detect state changes made by other clients. The IMAP protocol uses TCP port 143, and TCP port 993 for SSL-secured IMAPS connections.
- Intel Active Management Technology
Intel Active Management Technology (AMT) is a hardware and firmware backdoor for remote out-of-band management of personal computers. It runs on the Intel Management Engine, a separate microprocessor not exposed to the user, in order to monitor, maintain, update, upgrade, and repair them.
Features include remote power up/down, boot from remote storage devices, console redirection, remote KVM access and other remote management and security features.
Intel AMT is available on processors advertised under the umbrella marketing term Intel vPro technology, typically targeted at corporate customers since about 2007.
Unlike the Intel Management Engine, AMT usually can be switched off via the computer’s BIOS options.
- Intel Management Engine
- Manageability Engine
The Intel Management Engine (ME), also known as the Manageability Engine, is an autonomous subsystem that has been incorporated in virtually all of Intel’s processor chipsets since 2008. It is located in the Platform Controller Hub of modern Intel motherboards. It is a part of Intel Active Management Technology, which allows system administrators to perform tasks on the machine remotely. System administrators can use it to turn the computer on and off, and they can login remotely into the computer regardless of whether or not an operating system is installed.
The Intel Management Engine always runs as long as the motherboard is receiving power, even when the computer is turned off.
The ME is an attractive target for hackers, since it has top level access to all devices and completely bypasses the operating system. Intel has not released much information on the Intel Management Engine, prompting speculation that it may include a backdoor. The Electronic Frontier Foundation has voiced concern about IME.
AMD processors have a similar feature, called AMD Secure Technology.
- Internet Relay Chat
Key-signing-key (KSK) is the cryptographic key-pair used in DNSSEC to sign Zone-Signing-Keys (ZSK). The KSK public key is signed by the parent and published as a Delegation Signer (DS) record in the parent zone. The child zone publishes the public part of the KSK as a DNSKEY record in its own zone.
- Link Aggregation Control Protocol
- Local Delivery Agent
The software program in charge of delivering mail messages to their final destination on the local system, usually a user’s mailbox, after receiving them from the MTA.
LFU means “Least Frequently Used”
The Local Mail Transfer Protocol is a derivative of ESMTP, the extension of the Simple Mail Transfer Protocol. It is defined in RFC 2033.
LRU means “Least Recently Used”
Lua (from Portuguese meaning “moon”) is a lightweight, multi-paradigm programming language designed primarily for embedded use in applications. Lua is cross-platform, since the interpreter of compiled bytecode is written in ANSI C, and Lua has a relatively simple C API to embed it into applications.
Lua was originally designed in 1993 as a language for extending software applications to meet the increasing demand for customization at the time. It provided the basic facilities of most procedural programming languages, but more complicated or domain-specific features were not included; rather, it included mechanisms for extending the language, allowing programmers to implement such features. As Lua was intended to be a general embeddable extension language, the designers of Lua focused on improving its speed, portability, extensibility, and ease-of-use in development.
- Message Authentication Code
- MAC Address
- Media Access Control
- Media Access Control Address
A media access control address (MAC address) of a device is a unique identifier assigned to a network interface controller (NIC). For communications within a network segment, it is used as a network address for most IEEE 802 network technologies, including Ethernet, Wi-Fi, and Bluetooth. Within the Open Systems Interconnection (OSI) model, MAC addresses are used in the medium access control protocol sublayer of the data link layer. As typically represented, MAC addresses are recognizable as six groups of two hexadecimal digits, separated by hyphens, colons, or no separator.
A MAC address may be referred to as the burned-in address, and is also known as an Ethernet hardware address, hardware address, and physical address.
A network node with multiple NICs must have a unique MAC address for each. Sophisticated network equipment such as a multilayer switch or router may require one or more permanently assigned MAC addresses.
MAC addresses are most often assigned by the manufacturer of network interface cards. Each is stored in hardware, such as the card’s read-only memory or by a firmware mechanism. A MAC address typically includes the manufacturer’s organizationally unique identifier (OUI).
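The notational variants can be converted into one another mechanically. The helper below (`normalize_mac` is a hypothetical name, and the address is a made-up example) accepts colon, hyphen, or separator-free notation, and shows how the OUI falls out of the first three octets:

```python
import re

def normalize_mac(mac: str) -> str:
    """Return the lowercase, colon-separated form of a MAC address."""
    digits = re.sub(r"[^0-9A-Fa-f]", "", mac)  # strip any separators
    if len(digits) != 12:
        raise ValueError("a MAC address has 12 hexadecimal digits")
    return ":".join(digits[i:i + 2] for i in range(0, 12, 2)).lower()

mac = normalize_mac("00-1B-63-84-45-E6")  # hyphen notation in
print(mac)      # 00:1b:63:84:45:e6 -- colon notation out
print(mac[:8])  # 00:1b:63 -- the first three octets are the vendor's OUI
```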
- Mail Access Server
- Mail Delivery Agent
Another name for LDA or Local Delivery Agent.
Memcached is a general-purpose distributed memory caching system. It is often used to speed up dynamic database-driven websites by caching data and objects in RAM to reduce the number of times an external data source (such as a database or API) must be read. Memcached is free and open-source software, licensed under the Revised BSD license. Memcached runs on Unix-like operating systems and on Microsoft Windows.
Memcached’s APIs provide a very large hash table distributed across multiple machines. When the table is full, subsequent inserts cause older data to be purged in least recently used (LRU) order. Applications using Memcached typically layer requests and additions into RAM before falling back on a slower backing store, such as a database.
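The LRU eviction policy itself is easy to sketch in-process (the class below is a simplified stand-in; the real memcached is a networked daemon written in C):

```python
from collections import OrderedDict

class TinyLRUCache:
    """Minimal sketch of Memcached-style least-recently-used eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()

    def get(self, key):
        if key not in self.data:
            return None  # cache miss: caller falls back to the backing store
        self.data.move_to_end(key)  # mark as most recently used
        return self.data[key]

    def set(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the least recently used item

cache = TinyLRUCache(2)
cache.set("a", 1)
cache.set("b", 2)
cache.get("a")         # "a" is now most recently used
cache.set("c", 3)      # full: evicts "b"
print(cache.get("b"))  # None
```

A cache miss (`None`) is the signal for the application to fall back to the slower backing store and repopulate the cache.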
Milter (portmanteau for mail filter) is an extension to the widely used open source mail transfer agents (MTA) Sendmail and Postfix. It allows administrators to add mail filters for filtering spam or viruses in the mail-processing chain. In the language of the art, “milter” refers to the protocol and API implementing the service, while “a milter” has come to refer to a filter application that uses milter to provide service.
- Message Submission Agent
The software program in charge of receiving mail messages from the MUA using the Submission protocol. The MSA runs as a Daemon.
- Mail Transfer Agent
- SMTP MTA Strict Transport Security
SMTP Mail Transfer Agent Strict Transport Security (MTA-STS) is a mechanism enabling mail service providers to declare their ability to receive Transport Layer Security (TLS) secure SMTP connections, and to specify whether sending SMTP servers should refuse to deliver to MX hosts that do not offer TLS with a trusted server certificate. MTA-STS is described in RFC 8461.
- Message User Agent
The software program in charge of retrieving messages from a user’s mailbox on a MAS or Mail Access Server, usually using either the IMAP or POP3 protocol. The MUA may also submit mail messages to the MSA or Message Submission Agent using the Submission protocol. MUAs are commonly known as mail clients. Well-known MUA software products are Microsoft Outlook and Mozilla Thunderbird.
DNS record for “Mail Exchanger”, informing the sending system which hosts are responsible for receiving mail for a domain over SMTP.
- National Institute of Standards and Technology
The National Institute of Standards and Technology (NIST) is a measurement standards laboratory, and a non-regulatory agency of the United States Department of Commerce. Its mission is to promote innovation and industrial competitiveness. In 2013 the newspapers Guardian and New York Times reported that NIST allowed the National Security Agency (NSA) to insert a cryptographically secure pseudorandom number generator called Dual EC DRBG into NIST standard SP 800-90 that had a kleptographic backdoor that the NSA can use to covertly predict the future outputs of this pseudorandom number generator thereby allowing the surreptitious decryption of data.
- NIST P curves
- NIST P-224
- NIST P-256
- NIST P-384
According to Bernstein and Lange, many of the efficiency-related decisions in NIST FIPS 186-2 are sub-optimal; other curves are more secure and run just as fast.
In 2014 Daniel J. Bernstein and Tanja Lange claimed that most real-world implementations of Elliptic-Curve Cryptography are not to be considered safe. Amongst many others, they also criticized the NIST curves. Use only if better alternatives like Curve25519 are not available.
- National Security Agency
- Network Time Protocol
Network Time Protocol (NTP) is a networking protocol for clock synchronization between computer systems over packet-switched, variable-latency data networks. In operation since before 1985, NTP is one of the oldest Internet protocols in current use.
NTP is intended to synchronize all participating computers to within a few milliseconds of Coordinated Universal Time (UTC). It is designed to mitigate the effects of variable network latency. NTP can usually maintain time to within tens of milliseconds over the public Internet, and can achieve better than one millisecond accuracy in local area networks under ideal conditions. Asymmetric routes and network congestion can cause errors of 100 ms or more.
- Null Modem
Null modem is a communication method to directly connect two DTEs (computer, terminal, printer, etc.) using an RS-232 serial cable. The name stems from the historical use of RS-232 cables to connect two teleprinter devices or two modems in order to communicate with one another; null modem communication refers to using a crossed-over RS-232 cable to connect the teleprinters directly to one another without the modems. It is also used to serially connect a computer to a printer, since both are DTE, and is known as a Printer Cable.
- Open Publication Distribution System
The Open Publication Distribution System (OPDS) is a way for electronic book reading devices to access catalogs of books and download the books themselves from a web server. Its specification is prepared by an informal grouping of partners, combining Internet Archive, O’Reilly Media, Feedbooks, OLPC, and others.
- Power Distribution Unit
A power distribution unit (PDU) or mains distribution unit (MDU) is a device fitted with multiple outputs designed to distribute electric power, especially to racks of computers and networking equipment located within a data center. Data centers face challenges in power protection and management solutions. This is why many data centers rely on PDU monitoring to improve efficiency, uptime, and growth.
Privacy Enhanced Mail (PEM) is a 1993 IETF proposal for securing email using public-key cryptography. Although PEM became an IETF proposed standard it was never widely deployed or used.
- PEM Encoded
- PEM File Format
Base64 encoded binary data, often used to store X.509 certificates and keys, usually enclosed between “-----BEGIN CERTIFICATE-----” and “-----END CERTIFICATE-----” lines.
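Decoding such a file needs nothing more than stripping the armor lines and Base64-decoding the body. A sketch with placeholder binary data (not a real certificate):

```python
import base64
import textwrap

der = bytes(range(32))  # placeholder binary (DER) payload, not a real certificate

# Encode: Base64 the binary data and wrap it in the armor lines.
body = "\n".join(textwrap.wrap(base64.b64encode(der).decode("ascii"), 64))
pem = f"-----BEGIN CERTIFICATE-----\n{body}\n-----END CERTIFICATE-----\n"

# Decode: drop the BEGIN/END lines and Base64-decode the rest.
lines = pem.strip().splitlines()
decoded = base64.b64decode("".join(lines[1:-1]))
print(decoded == der)  # True: the round trip is lossless
```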
- Public-Key Cryptography Standards
PKCS stands for “Public Key Cryptography Standards”. These are a group of public-key cryptography standards devised and published by RSA Security LLC, starting in the early 1990s. The company published the standards to promote the use of the cryptography techniques to which they had patents, such as the RSA algorithm, the Schnorr signature algorithm and several others. Though not industry standards (because the company retained control over them), some of the standards have in recent years begun to move into the “standards-track” processes of relevant standards organizations such as the IETF and the PKIX working-group.
- PKCS #1
- RSA Cryptography Standard
See RFC 8017. Defines the mathematical properties and format of RSA public and private keys (ASN.1-encoded in clear-text), and the basic algorithms and encoding/padding schemes for performing RSA encryption, decryption, and producing and verifying signatures.
- PKCS #11
- Cryptographic Token Interface
Also known as “Cryptoki”. An API defining a generic interface to cryptographic tokens (see also hardware security module). Often used in single sign-on, public-key cryptography and disk encryption systems. RSA Security has turned over further development of the PKCS #11 standard to the OASIS PKCS 11 Technical Committee. See also PKCS.
- PKCS #15
- Cryptographic Token Information Format Standard
Defines a standard allowing users of cryptographic tokens to identify themselves to applications, independent of the application’s Cryptoki implementation (PKCS #11) or other API. RSA has relinquished IC-card-related parts of this standard to ISO/IEC 7816-15. See also PKCS.
The Post Office Protocol (POP) is an Internet protocol used by mail clients to retrieve mail from remote servers over a TCP/IP connection. POP has been developed through several versions, with version 3 (POP3) being the current standard.
- Quality of Service
- Rainbow Table
RC4 is the most widely used software stream cipher and is used in popular protocols such as Transport Layer Security (TLS) and WEP (to secure wireless networks). While remarkable for its simplicity and speed in software, RC4 has weaknesses that argue against its use in new systems. As of 2013, there is speculation that some state cryptologic agencies may possess the capability to break RC4 even when used in the TLS protocol. RC4 should be disabled and avoided wherever possible!
- Regular Expression
A regular expression, regex or regexp is a sequence of characters that define a search pattern. Usually such patterns are used by string searching algorithms for “find” or “find and replace” operations on strings, or for input validation. It is a technique developed in theoretical computer science and formal language theory.
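A small example with Python’s `re` module, showing both “find” and “find and replace” on a made-up string:

```python
import re

text = "released 05.12.2013, revised 14.01.2014"
pattern = re.compile(r"(\d{2})\.(\d{2})\.(\d{4})")  # DD.MM.YYYY with capture groups

print(pattern.findall(text))           # [('05', '12', '2013'), ('14', '01', '2014')]
print(pattern.sub(r"\3-\2-\1", text))  # released 2013-12-05, revised 2014-01-14
```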
- Request for Comments
A Request for Comments (RFC) is a publication of the Internet Engineering Task Force (IETF) and the Internet Society, the principal technical development and standards-setting bodies for the Internet.
- Read-Only Memory
Read-only memory (ROM) is a class of storage medium used in computers and other electronic devices. Data stored in ROM can only be modified slowly, with difficulty, or not at all, so it is mainly used to distribute Firmware.
In telecommunications, RS-232, Recommended Standard 232 refers to a standard originally introduced in 1960 for serial communication transmission of data. It formally defines signals connecting between a DTE (data terminal equipment) such as a computer terminal, and a DCE (data circuit-terminating equipment or data communication equipment), such as a modem. The standard defines the electrical characteristics and timing of signals, the meaning of signals, and the physical size and pinout of connectors.
See also Serial Port.
RSA is one of the first practicable public-key cryptosystems and is widely used for secure data transmission. In such a cryptosystem, the encryption key is public and differs from the decryption key, which is kept secret. RSA stands for Ron Rivest, Adi Shamir and Leonard Adleman, who first publicly described the algorithm in 1977.
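The arithmetic behind RSA can be shown with the classic textbook-sized primes below; these numbers are hopelessly insecure and serve only to make the math visible (real keys use primes of 1024 bits and more):

```python
p, q = 61, 53            # two (toy) secret primes
n = p * q                # 3233: the public modulus
phi = (p - 1) * (q - 1)  # 3120
e = 17                   # public exponent, coprime to phi
d = pow(e, -1, phi)      # 2753: private exponent via modular inverse (Python 3.8+)

m = 65                   # the message, encoded as a number smaller than n
c = pow(m, e, n)         # encrypt with the public key (e, n)
print(pow(c, d, n))      # decrypt with the private key (d, n) -> 65
```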
- RTS/CTS Threshold
RTS (Request to send) and CTS (Clear to Send) is the optional mechanism used by the 802.11 wireless networking protocol to reduce frame collisions introduced by the “hidden node problem”. Originally the protocol fixed the “exposed node problem” as well, but modern RTS/CTS includes ACKs and does not solve the exposed node problem.
RTS (Request to Send) is sent by the client to the access point; it essentially asks for permission to send the next data packet. The lower the threshold, the more stable your Wi-Fi network, since the client asks permission more often when sending packets. However, if you don’t have problems with your Wi-Fi, you should make sure that the RTS Threshold is set to the maximum allowed.
In cryptography, a salt is random data that is used as an additional input to a Cryptographic Hash Function on a password or passphrase. The primary function of salts is to defend against dictionary attacks versus a list of password hashes and against pre-computed Rainbow Table attacks. A new salt is randomly generated for each password. In a typical setting, the salt and the password are concatenated and processed with a Cryptographic Hash Function, and the resulting output (but not the original password) is stored with the salt in a database. Hashing allows for later authentication while defending against compromise of the plaintext password in the event that the database is somehow compromised. Cryptographic salts are broadly used in many modern computer systems, from Unix system credentials to Internet security.
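A minimal sketch of the salt-then-hash scheme described above, with a made-up password; for real password storage, a deliberately slow KDF such as PBKDF2, scrypt, or Argon2 should replace the single SHA-256 call:

```python
import hashlib
import os

password = b"correct horse battery staple"  # example password
salt = os.urandom(16)                       # fresh random salt per password

# Store the (salt, digest) pair; the plaintext password is never stored.
digest = hashlib.sha256(salt + password).hexdigest()

# Later authentication: recompute with the stored salt and compare.
attempt = hashlib.sha256(salt + password).hexdigest()
print(attempt == digest)  # True for the correct password
```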
- Serial Port
- COM Port
In computing, a serial port is a serial communication interface through which information transfers in or out one bit at a time (in contrast to a parallel port). Throughout most of the history of personal computers, data was transferred through serial ports to devices such as modems, terminals, and various peripherals.
While such interfaces as Ethernet, FireWire, and USB all send data as a serial stream, the term serial port usually identifies hardware compliant to the RS-232 standard or similar and intended to interface with a modem or with a similar communication device.
Modern computers without serial ports may require USB-to-serial converters to allow compatibility with RS-232 serial devices. Serial ports are still used in applications such as industrial automation systems, scientific instruments, point of sale systems and some industrial and consumer products. Server computers may use a serial port as a control console for diagnostics. Network equipment (such as routers and switches) often use serial console for configuration. Serial ports are still used in these areas as they are simple, cheap and their console functions are highly standardized and widespread. A serial port requires very little supporting software from the host system.
On personal computers they are called COM ports and numbered COM1, COM2, etc.
SHA-1 is a Cryptographic Hash Function designed by the NSA and is a U.S. Government standard published by the United States NIST in 1995. SHA stands for “secure hash algorithm”. In 2005, analysts found attacks on SHA-1 suggesting that the algorithm might not be secure enough for ongoing use. The U.S., German and other governments were required to move to SHA-2 after 2010 because of this weakness. Windows will stop accepting SHA-1 certificates by 2017. However, a large part of today’s commercial certificate authorities still only issue SHA-1 signed certificates. Avoid where possible!
SHA-3 (Secure Hash Algorithm 3) is the latest member of the Secure Hash Algorithm family of standards, released by NIST on August 5, 2015. Although part of the same series of standards, SHA-3 is internally different from the MD5-like structure of SHA-1 and SHA-2.
SHA-3 is a subset of the broader cryptographic primitive family Keccak designed by Guido Bertoni, Joan Daemen, Michaël Peeters, and Gilles Van Assche, building upon RadioGatún. Keccak’s authors have proposed additional uses for the function, not (yet) standardized by NIST, including a stream cipher, an authenticated encryption system, a “tree” hashing scheme for faster hashing on certain architectures, and AEAD ciphers Keyak and Ketje.
Keccak is based on a novel approach called sponge construction. Sponge construction is based on a wide random function or random permutation, and allows inputting (“absorbing” in sponge terminology) any amount of data, and outputting (“squeezing”) any amount of data, while acting as a pseudorandom function with regard to all previous inputs. This leads to great flexibility.
NIST does not currently plan to withdraw SHA-2 or remove it from the revised Secure Hash Standard. The purpose of SHA-3 is that it can be directly substituted for SHA-2 in current applications if necessary, and to significantly improve the robustness of NIST’s overall hash algorithm toolkit.
SHA-2 is a Cryptographic Hash Function, published in 2001 by the US government (NSA & NIST), and is significantly different from SHA-1. SHA-2 currently consists of a set of six Hash Functions with digests of 224, 256, 384 or 512 bits.
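The family members are all available in Python’s `hashlib` and differ only in digest length; the hexdigest length is always the digest size in bits divided by four:

```python
import hashlib

for name in ("sha224", "sha256", "sha384", "sha512"):
    h = hashlib.new(name, b"abc")
    print(name, h.digest_size * 8, len(h.hexdigest()))
    # prints the digest sizes 224, 256, 384 and 512 bits in turn
```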
- Short Preamble
- Long Preamble
Preamble Type is a simple router option that can slightly boost the performance of your wireless network. Most routers and firmware have the Preamble Type set to long by default.
The preamble adds additional data header strings that help detect wifi data transmission errors. A Short Preamble uses shorter data strings, adding less error-check redundancy data to each transmission, which makes it faster. A Long Preamble uses longer data strings, which allow for better error checking capability.
Sieve is a programming language that can be used to create filters for email. Sieve’s base specification is outlined in RFC 5228.
The Simple Mail Transfer Protocol (SMTP) is the protocol used by a MTA to transmit mails between Internet domains. First defined by RFC 821 in 1982, it was last updated in 2008 as ESMTP. SMTP by default uses TCP port 25. SMTP connections secured by SSL, known as SMTPS, default to TCP port 465.
Simple Mail Transfer Protocol Secure (SMTPS) was a way to provide SSL-secured SMTP connections on TCP port 465. SMTPS was revoked in favor of Submission in 1998, and TCP port 465 was reserved for other purposes. Nonetheless, many mail service providers still provide this service on port 465 today.
- Sender Policy Framework
Sender Policy Framework (SPF) is an email authentication method designed to detect forged sender addresses during the delivery of the email. On its own, though, SPF can only detect a forged sender claimed in the envelope of the mail, which is used when the mail gets bounced. Only in combination with DMARC can it be used to detect forging of the visible sender in emails (email spoofing), a technique often used in phishing and email spam.
SPF allows the receiving mail server to check during mail delivery that a mail claiming to come from a specific domain is submitted by an IP address authorized by that domain’s administrators. The list of authorized sending hosts and IP addresses for a domain is published in the DNS records for that domain.
Sender Policy Framework is defined in RFC 7208 dated April 2014 as a “proposed standard”.
Secure Shell (SSH) is a cryptographic network protocol for operating network services securely over an unsecured network. Typical applications include remote command-line, login, and remote command execution, but any network service can be secured with SSH.
- Service Set Identifier
In IEEE 802.11 wireless local area networking standards (including Wi-Fi), a service set is a group of wireless network devices that are operating with the same networking parameters.
The SSID or “Service Set Identifier” is a unique ID of up to 32 characters that is used for naming wireless networks. When multiple wireless networks overlap in a certain location, SSIDs make sure that data gets sent to the correct destination.
Each packet sent over a wireless network includes the SSID, which ensures that the data being sent over the air arrives at the correct location.
See also BSSID.
- Secure Sockets Layer
Secure Sockets Layer is the predecessor of Transport Layer Security (TLS).
- Opportunistic TLS
Opportunistic TLS (Transport Layer Security) refers to extensions in plain text communication protocols, which offer a way to upgrade a plain text connection to an encrypted (TLS or SSL) connection instead of using a separate port for encrypted communication. Several protocols use a command named “STARTTLS” for this purpose. It is primarily intended as a countermeasure to passive monitoring. The STARTTLS command for IMAP and POP3 is defined in RFC 2595, for SMTP in RFC 3207, for XMPP in RFC 6120 and for NNTP in RFC 4642. For IRC, the IRCv3 Working Group has defined the STARTTLS extension. FTP uses the command “AUTH TLS” defined in RFC 4217 and LDAP defines a protocol extension OID in RFC 2830. HTTP uses the Upgrade header.
- Stock ROM
Original Firmware of a device as supplied by the manufacturer on a programmable ROM. The term is mostly used in the context where a third party provides alternative Firmware which may enhance or otherwise change the functionality of a device, beyond the intentions of its original manufacturer.
Message Submission for Mail is a protocol defined in RFC 6409 and used by mail clients (MSA, MUA) to submit electronic mail for further delivery on the internet. It is essentially SMTP, but with mandatory TLS encryption and user authentication added, running on TCP port 587.
- Temporal Key Integrity Protocol
Temporal Key Integrity Protocol (TKIP) is a security protocol used in the IEEE 802.11 wireless networking standard. TKIP was designed by the IEEE 802.11i task group and the Wi-Fi Alliance as an interim solution to replace WEP without requiring the replacement of legacy hardware. This was necessary because the breaking of WEP had left Wi-Fi networks without viable link-layer security, and a solution was required for already deployed hardware. However, TKIP itself is no longer considered secure, and was deprecated in the 2012 revision of the 802.11 standard.
“Too Long; Didn’t Read”.
- Transport Layer Security
Transport Layer Security (TLS) and its predecessor, Secure Sockets Layer (SSL), are cryptographic protocols designed to provide communication security over the Internet. They use X.509 certificates, and hence asymmetric cryptography, to authenticate the counterparty with whom they are communicating and to exchange a symmetric key. This session key is then used to encrypt data flowing between the parties, providing data/message confidentiality, while message authentication codes provide message integrity and, as a by-product, message authentication.
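The handshake described above can be sketched with Python’s standard ssl module; the host is a parameter, and the default context performs the X.509 chain and hostname validation:

```python
import socket
import ssl

def peer_certificate(host: str, port: int = 443) -> dict:
    """Perform a TLS handshake and return the server's validated certificate."""
    ctx = ssl.create_default_context()  # CA-based X.509 validation + hostname check
    with socket.create_connection((host, port)) as raw:
        with ctx.wrap_socket(raw, server_hostname=host) as tls:
            # Handshake complete: certificate checked, session key negotiated.
            return tls.getpeercert()
```

All application data sent over the `tls` socket after the handshake is encrypted with the negotiated session key.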
A TLSA DNS record publishes information on the certificates used by a TLS-secured server. Clients (e.g. web browsers) can verify the TLS certificate of a server by checking the TLSA DNS record instead of, or in addition to, checking whether the certificate is signed by a trusted certificate authority. TLSA is part of the DANE specification. Domains publishing TLSA records must be secured by DNSSEC.
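As a sketch, one common TLSA parameter combination is “3 0 1” (usage 3 = DANE-EE, selector 0 = full certificate, matching type 1 = SHA-256), whose record data is simply the SHA-256 digest of the server certificate in DER form; the record name below is a hypothetical example:

```python
import hashlib

def tlsa_rdata(der_certificate: bytes) -> str:
    """Build the RDATA of a '3 0 1' TLSA record: usage 3 (DANE-EE),
    selector 0 (full certificate), matching type 1 (SHA-256)."""
    return "3 0 1 " + hashlib.sha256(der_certificate).hexdigest()

# The record would then be published under a name such as
# _443._tcp.mail.example.org (port and protocol encoded in the owner name).
```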
Trust on first use (TOFU), or trust upon first use (TUFU), is a security model used by client software which needs to establish a trust relationship with an unknown or not-yet-trusted endpoint. In a TOFU model, the client will try to look up the identifier, usually some kind of public key, in its local trust database. If no identifier exists yet for the endpoint, the client software will either prompt the user to determine if the client should trust the identifier or it will simply trust the identifier which was given and record the trust relationship into its trust database. If a different identifier is received in subsequent connections to the endpoint the client software will consider it to be untrusted.
The TOFU approach can be used when connecting to arbitrary or unknown endpoints which do not have a trusted third party such as a certificate authority. For example, the SSH protocol is designed to issue a prompt the first time the client connects to an unknown or not-yet-trusted endpoint. Other implementations of TOFU can be found in HTTP Public Key Pinning, in which browsers will always accept the first public key returned by the server, and in HTTP Strict Transport Security, in which browsers will obey the redirection rule for the duration given by the ‘max-age’ directive.
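The model described above can be sketched as a small trust database; this variant trusts the first identifier silently rather than prompting the user, and the file format is an illustrative choice:

```python
import json
from pathlib import Path

class TofuStore:
    """Minimal trust-on-first-use database mapping endpoints to identifiers."""

    def __init__(self, path: str):
        self.path = Path(path)
        self.known = json.loads(self.path.read_text()) if self.path.exists() else {}

    def trust(self, endpoint: str, identifier: str) -> bool:
        recorded = self.known.get(endpoint)
        if recorded is None:
            # First use: record the identifier and trust it from now on.
            self.known[endpoint] = identifier
            self.path.write_text(json.dumps(self.known))
            return True
        # Subsequent connections: only the recorded identifier is trusted.
        return recorded == identifier
```

A changed identifier on a later connection returns False, mirroring SSH’s “host key has changed” warning.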
In cryptography, Twofish is a symmetric key block cipher with a block size of 128 bits and key sizes up to 256 bits. It was one of the five finalists of the Advanced Encryption Standard contest, but it was not selected for standardization. Twofish is related to the earlier block cipher Blowfish.
- Voice over IP
- Voice over Wireless LAN
- Wired Equivalent Privacy
Wired Equivalent Privacy (WEP) is a security algorithm for IEEE 802.11 wireless networks. Introduced as part of the original 802.11 standard ratified in 1997, its intention was to provide data confidentiality comparable to that of a traditional wired network.
WEP, recognizable by its key of 10 or 26 hexadecimal digits (40 or 104 bits), was at one time widely in use and was often the first security choice presented to users by router configuration tools.
In 2003 the Wi-Fi Alliance announced that WEP had been superseded by Wi-Fi Protected Access (WPA). In 2004, with the ratification of the full 802.11i standard (i.e. WPA2), the IEEE declared that both WEP-40 and WEP-104 had been deprecated.
WEP was the only encryption protocol available to 802.11a and 802.11b devices built before the WPA standard, which was available for 802.11g devices. However, some 802.11b devices were later provided with firmware or software updates to enable WPA, and newer devices had it built in.
- Wi-Fi Multimedia
- Wireless Multimedia Extensions
Wireless Multimedia Extensions (WME), also known as Wi-Fi Multimedia (WMM), is a Wi-Fi Alliance interoperability certification, based on the IEEE 802.11e standard. It provides basic Quality of Service (QoS) features to IEEE 802.11 networks. WMM prioritizes traffic according to four Access Categories (AC): voice (AC_VO), video (AC_VI), best effort (AC_BE), and background (AC_BK). However, it does not provide guaranteed throughput. It is suitable for well-defined applications that require QoS, such as Voice over IP (VoIP) on Wi-Fi phones (VoWLAN).
WMM is mandatory for 802.11n. If you disable WMM, you also disable 802.11n, and your wireless network will automatically fall back to 802.11g.
- WLAN Channel
- Wireless LAN Channel
- Wireless Local Area Network Channel
Wireless local area network channels using IEEE 802.11 protocols are sold mostly under the trademark Wi-Fi.
The 802.11 workgroup has documented use in five distinct frequency ranges: 2.4 GHz, 3.6 GHz, 4.9 GHz, 5 GHz, and 5.9 GHz bands. Each range is divided into a multitude of channels. Countries apply their own regulations to the allowable channels, allowed users and maximum power levels within these frequency ranges.
A List of WLAN Channels is available at Wikipedia.
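In the 2.4 GHz band, for example, the channel centre frequencies follow a simple rule: channels 1–13 are spaced 5 MHz apart starting at 2412 MHz, and channel 14 is a special case. A sketch:

```python
def channel_center_mhz(channel: int) -> int:
    """Centre frequency of a 2.4 GHz band 802.11 channel, in MHz."""
    if channel == 14:            # special case, permitted only in Japan
        return 2484
    if 1 <= channel <= 13:
        return 2407 + 5 * channel
    raise ValueError("not a 2.4 GHz band channel")

# channel_center_mhz(1) -> 2412, channel_center_mhz(6) -> 2437
```

Because each channel is 20 MHz wide but centres are only 5 MHz apart, adjacent channels overlap; the classic non-overlapping set in this band is 1, 6 and 11.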
- Wi-Fi Protected Access
- IEEE 802.11i
Wi-Fi Protected Access (WPA) refers to a family of security certification programs developed by the Wi-Fi Alliance to secure wireless computer networks. The Alliance defined these in response to serious weaknesses researchers had found in the previous system, Wired Equivalent Privacy (WEP).
WPA (sometimes referred to as the draft IEEE 802.11i standard) became available in 2003. The Wi-Fi Alliance intended it as an intermediate measure in anticipation of the availability of the more secure and complex WPA2, which became available in 2004 and is a common shorthand for the full IEEE 802.11i (or IEEE 802.11i-2004) standard.
In January 2018, the Wi-Fi Alliance announced the release of WPA3, with several security improvements over WPA2.
- Wi-Fi Protected Access II
- IEEE 802.11i-2004
IEEE 802.11i-2004, or 802.11i for short, is an amendment to the original IEEE 802.11, implemented as Wi-Fi Protected Access II (WPA2). The draft standard was ratified on 24 June 2004. This standard specifies security mechanisms for wireless networks, replacing the short Authentication and privacy clause of the original standard with a detailed Security clause. In the process, the amendment deprecated the broken Wired Equivalent Privacy (WEP); the amendment was later incorporated into the published IEEE 802.11-2007 standard.
In personal mode, a WPA2 wireless connection uses a pre-shared key (i.e. a password) to carry out the initial authentication process.
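The pre-shared key is not used on the air directly: 802.11i derives a 256-bit key from the passphrase and the network name (SSID) with PBKDF2. A sketch using only the Python standard library:

```python
import hashlib

def wpa2_psk(passphrase: str, ssid: str) -> bytes:
    """Derive the 256-bit WPA2 pre-shared key per IEEE 802.11i:
    PBKDF2-HMAC-SHA1 with the SSID as salt and 4096 iterations."""
    return hashlib.pbkdf2_hmac("sha1", passphrase.encode(), ssid.encode(), 4096, 32)
```

The 4096 iterations slow down offline dictionary attacks somewhat, but a weak passphrase remains guessable, which is one motivation for WPA3’s Simultaneous Authentication of Equals mentioned below.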
In January 2018, the Wi-Fi Alliance announced WPA3 as a replacement for WPA2. The new standard uses 128-bit encryption in WPA3-Personal mode (192-bit in WPA3-Enterprise) and provides forward secrecy. The WPA3 standard also replaces the pre-shared key exchange with Simultaneous Authentication of Equals, as defined in IEEE 802.11-2016, resulting in a more secure initial key exchange in personal mode. The Wi-Fi Alliance also claims that WPA3 will mitigate security issues posed by weak passwords and simplify the process of setting up devices with no display interface.
- Wi-Fi Protected Setup
Originally called Wi-Fi Simple Config, Wi-Fi Protected Setup (WPS) is a network security standard to create a secure wireless home network.
Created by the Wi-Fi Alliance and introduced in 2006, the goal of the protocol is to allow home users who know little of wireless security and may be intimidated by the available security options to set up Wi-Fi Protected Access, as well as making it easy to add new devices to an existing network without entering long passphrases. Prior to the standard, several competing solutions were developed by different vendors to address the same need.
A major security flaw was revealed in December 2011 that affects wireless routers with the WPS PIN feature, which most recent models have enabled by default. The flaw allows a remote attacker to recover the WPS PIN in a few hours with a brute-force attack and, with the WPS PIN, the network’s WPA/WPA2 pre-shared key. Users have been urged to turn off the WPS PIN feature.
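The brute-force attack is feasible because the eight-digit PIN carries little entropy: its last digit is a checksum of the first seven, and the protocol confirms each half of the PIN separately, leaving only about 10^4 + 10^3 combinations to try. The checksum digit is computed as follows (a sketch of the algorithm from the WPS specification):

```python
def wps_checksum_digit(first_seven: int) -> int:
    """Checksum digit appended to the first seven digits of a WPS PIN."""
    accum = 0
    remaining = first_seven
    while remaining:
        accum += 3 * (remaining % 10)  # digits in odd positions weighted by 3
        remaining //= 10
        accum += remaining % 10        # digits in even positions weighted by 1
        remaining //= 10
    return (10 - accum % 10) % 10

# e.g. the first seven digits 1234567 yield checksum 0 -> full PIN 12345670
```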
In cryptography, X.509 is an ITU-T standard for a public key infrastructure (PKI) and Privilege Management Infrastructure (PMI). X.509 specifies, amongst other things, standard formats for public key certificates, certificate revocation lists, attribute certificates, and a certification path validation algorithm.
Extensible Messaging and Presence Protocol (XMPP) is a communications protocol for message-oriented middleware based on XML (Extensible Markup Language). The protocol was originally named Jabber and was developed by the Jabber open-source community in 1999 for near real-time, instant messaging (IM), presence information, and contact list maintenance.