<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://cadebray.com/feed.xml" rel="self" type="application/atom+xml" /><link href="https://cadebray.com/" rel="alternate" type="text/html" /><updated>2026-04-09T15:35:46+00:00</updated><id>https://cadebray.com/feed.xml</id><title type="html">Cade Bray</title><author><name>Cade Bray</name><email>Bray.cade@gmail.com</email></author><entry><title type="html">Consider the motive for the attack</title><link href="https://cadebray.com/2025/06/23/consider-the-motive-for-the-attack.html" rel="alternate" type="text/html" title="Consider the motive for the attack" /><published>2025-06-23T00:00:00+00:00</published><updated>2025-06-23T00:00:00+00:00</updated><id>https://cadebray.com/2025/06/23/consider-the-motive-for-the-attack</id><content type="html" xml:base="https://cadebray.com/2025/06/23/consider-the-motive-for-the-attack.html"><![CDATA[<p>Motives are key aspects of a malicious actor’s agenda. If you can understand why they’re targeting you, then you may understand what is vulnerable. State actors, for instance, may target others to gather information simply for the sake of having the data. <!--more--> Lone malicious actors may instead have a secret agenda driven by monetary gain. How these are handled differs. With the state actor, there isn’t much to be done but monitor for the information resurfacing and prevent the next potential incursion. Responding to the lone malicious actor, on the other hand, would include identification, monitoring for the data to resurface, confirming how much of the data has resurfaced, and determining what communities the data was released to in order to gauge your users’ threat surface. Once that sensitive data changes hands it enters a new set of motives. Some may use SPII for social engineering, aggregate the data with other sources for resale, or even blackmail. 
I plan to make searching for the motive part of my best practices by taking the time to place myself in the attacker’s shoes.</p>

<p style="text-align: center;">
  <img src="/assets/attack_motive.png" alt="Attack Motive" width="400" />
</p>

<p>I would explain this to a new developer on my team by first doing some role playing. I would place myself in the shoes of a malicious actor and set the stage of what happened. Once I explained the scope of the issue, I’d ask them to tell me what I might have done or had access to. I’d then point out perspectives they might have missed by reversing the roles in the exercise. These war games can help build preparedness. Ultimately, this is not a one-time conversation that covers the depth of what security requires, but a continuous exercise that even new developers should get used to taking part in.</p>

]]></content><author><name>Cade Bray</name><email>Bray.cade@gmail.com</email></author><summary type="html"><![CDATA[Motives are key aspects of a malicious actor’s agenda. If you can understand why they’re targeting you, then you may understand what is vulnerable. State actors, for instance, may target others to gather information simply for the sake of having the data. Lone malicious actors may instead have a secret agenda driven by monetary gain. How these are handled differs. With the state actor, there isn’t much to be done but monitor for the information resurfacing and prevent the next potential incursion. Responding to the lone malicious actor, on the other hand, would include identification, monitoring for the data to resurface, confirming how much of the data has resurfaced, and determining what communities the data was released to in order to gauge your users’ threat surface. Once that sensitive data changes hands it enters a new set of motives. Some may use SPII for social engineering, aggregate the data with other sources for resale, or even blackmail. I plan to make searching for the motive part of my best practices by taking the time to place myself in the attacker’s shoes.]]></summary></entry><entry><title type="html">Don’t Leave Security to the End</title><link href="https://cadebray.com/2025/06/22/dont-leave-security-to-the-end.html" rel="alternate" type="text/html" title="Don’t Leave Security to the End" /><published>2025-06-22T00:00:00+00:00</published><updated>2025-06-22T00:00:00+00:00</updated><id>https://cadebray.com/2025/06/22/dont-leave-security-to-the-end</id><content type="html" xml:base="https://cadebray.com/2025/06/22/dont-leave-security-to-the-end.html"><![CDATA[<p>In this blog post, I’ll elaborate on the statement “Don’t leave security to the end” and what that means in terms of best practices. Cybersecurity is a constantly changing field that requires critical thinking in a multitude of areas. 
We need specialists who know applications, databases, networks, and even physical security, to name a few. The attack surface is growing as technology expands, and cybersecurity professionals need to be constantly learning. Even if a company hired security specialists for all of these unique areas, we would likely never be able to complete a project before the funding runs out. <!--more--> For this reason, we need to engage all members of a project, or the staff of a company, to participate in being security specialists.</p>

<p style="text-align: center;">
  <img src="/assets/dont-leave-security-to-the-end.png" alt="Don't Leave Security to the End visual" width="400" />
</p>

<p>This seems great in theory, as we’re able to cut back on payroll costs by having developers, database specialists, and IT specialists working diligently to mitigate threats. It poses an important question, though: when do you combat these threats? The answer is constantly, which can be intimidating to anyone in these roles because being a full-time security specialist isn’t necessarily what they signed up for. That’s where the security mindset becomes key to cultivate with your staff. A developer can still be a full-time developer, but instead of just running through a checklist at the end of their project for security purposes, they take a more critical approach to their work.</p>

<p>Every piece of code a developer writes needs a moment to consider time complexity, space complexity, and even readability. What we’re cultivating is an additional layer of thinking that I’ll coin as “security complexity”. This means analyzing the threat surface of the code that was written, considering possible solutions to reduce that surface, and stepping outside your role for a moment to think like a malicious actor who has gotten their hands on the source code and needs to formulate an exploit.</p>

<p>Some easy steps that a developer specifically can take to analyze their security complexity are to build unit tests that cover basic, well-documented exploits outlined by major organizations such as OWASP. Ensure that these unit tests have ample code coverage, because a test is useless if there are sections left unchecked. Use widely adopted libraries that have had major time invested in their development, because reinventing the wheel isn’t just tedious but leaves you vulnerable to attacks that someone else has already had an unfortunate encounter with. Using dependency checkers that can identify vulnerabilities is critical for understanding your attack surface.</p>
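<p>As a sketch of that idea, the snippet below (Python used for illustration; the validator and its rules are hypothetical) shows unit tests that pin an input validator against a couple of well-known injection payloads of the kind OWASP catalogs:</p>

```python
import re
import unittest

def is_valid_username(value: str) -> bool:
    """Accept only 3-30 characters of letters, digits, underscore, or hyphen."""
    return bool(re.fullmatch(r"[A-Za-z0-9_-]{3,30}", value))

class UsernameValidationTests(unittest.TestCase):
    def test_accepts_normal_input(self):
        self.assertTrue(is_valid_username("cade_bray"))

    def test_rejects_sql_injection_payload(self):
        # Quote, semicolon, and spaces all fall outside the allow-list.
        self.assertFalse(is_valid_username("admin'; DROP TABLE users;--"))

    def test_rejects_xss_payload(self):
        self.assertFalse(is_valid_username("<script>alert(1)</script>"))

    def test_rejects_empty_and_oversized(self):
        self.assertFalse(is_valid_username(""))
        self.assertFalse(is_valid_username("a" * 31))
```

<p>Keeping tests like these in the suite means every future change to the validator is automatically re-checked against the exploits you already know about.</p>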

<p>I touch on input validation as an example here. Input from all sources is typically expected to take some form, but it shouldn’t be trusted to conform to that expectation. As such, we can deny any data we intend to use that doesn’t conform to our expectation. If outright denial of data isn’t an option for your application because of dynamic needs, it ultimately boils down to understanding your attack surface. If you understand that the given input for your web application is going to be passed into a MySQL database, for instance, we can implement safe practices, such as parameterized queries or sanitizing the data of key SQL words, to prevent attacks. Proactive solutions like this allow you to mitigate your attack surface.</p>]]></content><author><name>Cade Bray</name><email>Bray.cade@gmail.com</email></author><summary type="html"><![CDATA[In this blog post, I’ll elaborate on the statement “Don’t leave security to the end” and what that means in terms of best practices. Cybersecurity is a constantly changing field that requires critical thinking in a multitude of areas. We need specialists who know applications, databases, networks, and even physical security, to name a few. The attack surface is growing as technology expands, and cybersecurity professionals need to be constantly learning. Even if a company hired security specialists for all of these unique areas, we would likely never be able to complete a project before the funding runs out. 
For this reason, we need to engage all members of a project, or the staff of a company, to participate in being security specialists.]]></summary></entry><entry><title type="html">AAA and Defense in Depth</title><link href="https://cadebray.com/2025/06/14/aaa-and-defense-in-depth.html" rel="alternate" type="text/html" title="AAA and Defense in Depth" /><published>2025-06-14T00:00:00+00:00</published><updated>2025-06-14T00:00:00+00:00</updated><id>https://cadebray.com/2025/06/14/aaa-and-defense-in-depth</id><content type="html" xml:base="https://cadebray.com/2025/06/14/aaa-and-defense-in-depth.html"><![CDATA[<p>In this case study I’ll take a closer look at the LinkedIn data breach of June 2021. This case made the news because the data was originally found on ‘RaidForums’ as a bulk selling lot covering 700 million users (Mathews, 2021), or 92% of all users’ public data, in a consolidated format, only two months after a similar occurrence. That earlier incident was slightly smaller, with 500 million users’ data becoming vulnerable (LinkedIn Update on 500 million, 2021). ‘RaidForums’ is a well-known data marketplace on the dark web where the user ‘Tomliner’ (Gibson, Townes, Lewis, &amp; Bhunia, 2021) added an additional 200 million users’ records to the 500 million previously leaked (LinkedIn Update on 700 million, 2021).</p>

<p style="text-align: center;">
  <img src="/assets/DID.jpg" alt="Defense in Depth visual" width="400" />
</p>

<p>The scope of the data included users’ full names, email addresses, phone numbers, and physical addresses (Gibson, Townes, Lewis, &amp; Bhunia, 2021). Since the data was not accessed from a protected system within LinkedIn but instead a public application programming interface (API) that exposed too much, we can assert that this incident was purely a data breach without a security breach (Norton, 2019). LinkedIn attempted to save face with their consumers by claiming this was not a data breach because the data was not sensitive personally identifiable information (SPII) but instead just personally identifiable information (PII) (Gibson, Townes, Lewis, &amp; Bhunia, 2021). This assertion by LinkedIn did little to calm their consumers as these incidents were becoming more frequent.</p>

<p>LinkedIn was a target for multiple reasons. The first reason a malicious actor may have singled them out was their robust public API. Their API allowed an attacker to gather user information in a conformed and easily accessible manner. While having an API for a data set can enhance a business model, such as by adding additional revenue streams, it can also add significant room to your company’s attack surface. Another reason LinkedIn may have been targeted is the desirability of their dataset. Tomliner added to this data by using other APIs, such as Facebook’s. This aggregate made for a rich source of data worth $5,000 per copy. This consolidated data could be valuable to other malicious actors, to businesses as marketing leads, or to businesses as recruiting leads. The most unfortunate part of this incident, and other data breaches like it, is that once the data is public there is not much a company can do to ‘roll back’ its effects. This permanently damages the reputation of that company with its user base.</p>

<p>LinkedIn can implement the AAA security measures, which include Authentication, Authorization, and Accounting. LinkedIn can require a bearer token or some other form of authentication on public API requests, which enables LinkedIn to identify who is asking for information. Authorization can be implemented to ensure that only the data the API user is paying for is sent, which reduces the attack surface of the company. Finally, Accounting helps create a chain of custody for the data so LinkedIn can enforce their terms and conditions on violating offenders with logs and verifiable proof.</p>
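<p>To make the three A’s concrete, here is a minimal sketch (Python for illustration; the token table, scope names, and return strings are all hypothetical, not LinkedIn’s actual design) of how an API endpoint could authenticate a caller, authorize the requested data scope, and account for the access in an audit log:</p>

```python
import logging
from datetime import datetime, timezone

# Hypothetical token store: token -> the data scopes that client has paid for.
API_TOKENS = {
    "token-abc123": {"profile:basic"},
    "token-def456": {"profile:basic", "profile:contact"},
}

access_log = logging.getLogger("api.audit")

def handle_request(token: str, requested_scope: str) -> str:
    # Authentication: who is asking?
    if token not in API_TOKENS:
        return "401 Unauthorized"
    # Authorization: are they allowed this particular data?
    if requested_scope not in API_TOKENS[token]:
        return "403 Forbidden"
    # Accounting: record who accessed what, and when.
    access_log.info("%s accessed %s at %s", token, requested_scope,
                    datetime.now(timezone.utc).isoformat())
    return "200 OK"
```

<p>The audit log is what turns a terms-of-service violation from a suspicion into verifiable proof.</p>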

<p>Immediate threats for LinkedIn would include ongoing access for malicious actors through their public API. Their robust API has proven so enticing that malicious actors cannot resist it, which indicates the risk-to-reward ratio now favors risk, and the API is no longer cost-effective to provide given the damage it causes. Some mitigation tactics the LinkedIn team could deploy are to break their API into multiple paid, separately accessible APIs and increase the overall price to query them. Finding the financial break-even point for a bad actor’s motives can help discourage them from even attempting their efforts while preserving the service for more legitimate users. AAA security measures are one form of enforcement and need to be layered with others, such as blocking high-risk regions, blocking known VPN servers, and regularly monitoring the data accessed, which may lead to audits of API users that are suspiciously overconsuming. By implementing multiple layers of security, you create what is known as Defense-in-Depth (DiD).</p>
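<p>One of those extra layers, throttling suspicious overconsumption, can be sketched with a token-bucket rate limiter (a generic Python illustration, not LinkedIn’s actual mechanism): each caller gets a bucket, legitimate request rates drain and refill it comfortably, and bulk scraping runs it dry.</p>

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: one defensive layer against bulk scraping."""

    def __init__(self, capacity: int, refill_per_sec: float):
        self.capacity = capacity
        self.tokens = float(capacity)
        self.refill_per_sec = refill_per_sec
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

<p>Requests denied here would also feed the accounting logs, flagging candidates for a deeper audit.</p>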

<p><strong>References</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- Gibson, B., Townes, S., Lewis, D., &amp; Bhunia, S. (2021, December 15-17). Vulnerability in Massive API Scraping: 2021 LinkedIn Data Breach. IEEE. Retrieved from https://ieeexplore.ieee.org/document/9799221
- LinkedIn. (2021, April 8). An update on report of scraped data (500 Million). Retrieved from LinkedIn Pressroom: https://news.linkedin.com/2021/april/an-update-from-linkedin
- LinkedIn. (2021, June 29). An update on report of scraped data (700 Million). Retrieved from LinkedIn Pressroom: https://news.linkedin.com/2021/june/an-update-from-linkedin
- Mathews, L. (2021, June 29). Details On 700 Million LinkedIn Users For Sale On Notorious Hacking Forum. Retrieved from Forbes: https://www.forbes.com/sites/leemathews/2021/06/29/details-on-700-million-linkedin-users-for-sale-on-notorious-hacking-forum/
- Norton. (2019, September 5). What is a security breach? Retrieved from Norton: https://us.norton.com/blog/privacy/security-breach
</code></pre></div></div>]]></content><author><name>Cade Bray</name><email>Bray.cade@gmail.com</email></author><summary type="html"><![CDATA[In this case study I’ll take a closer look at the LinkedIn data breach of June 2021. This case made the news because the data was originally found on ‘RaidForums’ as a bulk selling lot covering 700 million users (Mathews, 2021), or 92% of all users’ public data, in a consolidated format, only two months after a similar occurrence. That earlier incident was slightly smaller, with 500 million users’ data becoming vulnerable (LinkedIn Update on 500 million, 2021). ‘RaidForums’ is a well-known data marketplace on the dark web where the user ‘Tomliner’ (Gibson, Townes, Lewis, &amp; Bhunia, 2021) added an additional 200 million users’ records to the 500 million previously leaked (LinkedIn Update on 700 million, 2021).]]></summary></entry><entry><title type="html">Cartpole Problem Explained</title><link href="https://cadebray.com/2025/02/23/Cartpole-Problem-Explained.html" rel="alternate" type="text/html" title="Cartpole Problem Explained" /><published>2025-02-23T00:00:00+00:00</published><updated>2025-02-23T00:00:00+00:00</updated><id>https://cadebray.com/2025/02/23/Cartpole-Problem-Explained</id><content type="html" xml:base="https://cadebray.com/2025/02/23/Cartpole-Problem-Explained.html"><![CDATA[<p>The cartpole problem is a classic reinforcement learning objective where the goal is to balance a pole on a cart that has two directional movements. These movements, combined with the velocity of the cart, change the angle of the pole in relation to the cart. <!--more--> The objective is simply to balance the pole on the cart given the two controls: moving the cart left and right. At each step of a given episode the state of the cart, the pole angle, and the cart velocity are collected and processed in some form to determine the highest possible reward or lowest penalty. 
The next action is calculated through various algorithms; two common ones for an actor are Reinforce and A2C.</p>

<p style="text-align: center;">
  <img src="/assets/cartpole.png" alt="Cartpole" width="400" />
</p>

<p>The Reinforce algorithm is a policy-based algorithm. Being a policy-based algorithm means that an actor directly optimizes the policy function instead of using what is referred to as a value function (Beysolow II, 2019, p. 20). The Reinforce algorithm is better at converging on solutions than a value-based algorithm such as Q-Learning (Beysolow II, 2019, p. 21). Unfortunately, the Reinforce algorithm has difficulties with high variability in log probabilities and cumulative reward values, which creates a noisy gradient. This noisy gradient creates a less-than-optimal learning situation (Yoon, 2019). The Reinforce algorithm collects sequences of states, actions, and rewards before updating the policy. Monte Carlo sampling is a technique used in this algorithm where samples are chosen randomly to assist in approximations and find potentially unlearned rewards (Yoon, 2019). This algorithm also enables an agent to sample actions based on the probability distribution produced by the policy.</p>

<p>To summarize, during training an actor using the Reinforce algorithm to solve the cartpole problem will iterate through episodes, exploring and collecting information on potential rewards. It balances random decisions against exploiting its environment. This helps it approximate and progress through the episode while gathering the optimal rewards. The algorithm uses those rewards to adjust the weights of the actor, concentrating probability mass on the highest-reward actions in the distribution (Beysolow II, 2019, p. 21).</p>
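<p>The reward bookkeeping described above can be sketched as follows (a generic Monte Carlo return calculation in Python, not the book’s exact code): after an episode ends, each step’s return is its own reward plus the discounted return of everything that followed, and these returns are what weight the policy update.</p>

```python
def discounted_returns(rewards, gamma=0.99):
    """Work backwards through one episode's rewards, computing
    G_t = r_t + gamma * G_{t+1} for every step (Monte Carlo style)."""
    returns = []
    g = 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    returns.reverse()  # restore chronological order
    return returns
```

<p>With gamma below one, early actions get credit for later rewards without the sum growing unbounded.</p>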

<p>The Q-Learning algorithm, on the other hand, is a value-based algorithm. Being a value-based algorithm means that an actor collects information on its surroundings the same as a Reinforce algorithm but uses a different exploration method called the ‘Epsilon-Greedy Algorithm.’ In this scenario, epsilon is initialized with a value between zero and one that acts as a percentage of how often you want the agent to explore. To determine whether a Q-Learning agent should exploit or explore its surroundings, you generate a value between zero and one and compare it to epsilon; if it is less than epsilon, you explore the given environment. As the algorithm develops a better understanding of the rewards, epsilon is decayed through various methods to force the actor to exploit more frequently than explore (Beysolow II, 2019, pp. 59-60).</p>
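<p>The epsilon-greedy choice described above fits in a few lines (a generic Python illustration; the multiplicative decay schedule shown is just one of the various methods mentioned):</p>

```python
import random

def epsilon_greedy_action(q_values, epsilon):
    """With probability epsilon pick a random action (explore),
    otherwise pick the action with the highest Q-value (exploit)."""
    if random.random() < epsilon:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=q_values.__getitem__)

def decay(epsilon, rate=0.995, floor=0.01):
    """Shrink epsilon each episode, but keep a small floor of exploration."""
    return max(floor, epsilon * rate)
```

<p>Keeping a small floor on epsilon means the agent never stops exploring entirely, which guards against settling on a stale policy.</p>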

<p>Finally, we can speak to the Actor-Critic (A2C) algorithm. A2C is considered a hybrid of value-based (Q-Learning) and policy-based (Reinforce) algorithms, where two models coexist to create an optimal process. The actor is typically initialized with a policy-based algorithm like Reinforce while the critic is initialized with a value-based algorithm. In the cartpole problem, the actor must determine the optimal action by estimating where to move the cart to create the optimal angle. During this the critic evaluates the action the actor has taken and provides feedback on how a more optimal action could have been taken in that state (Beysolow II, 2019, p. 11). This critic feedback is used to update the actor’s policy on how it should decide the best action in the future. Slowly the actor and critic work as a team that better defines the weights in the policy.</p>

<p>Policy gradient approaches differ from value-based approaches in factors such as how the policies are represented, how the policy is optimized, and their convergence abilities. Policy gradients map states to actions and output a probability for each action in that state. A value-based approach, on the other hand, focuses on its value function to estimate the returned reward for each action based on the cumulative future rewards. These rewards are calculated from values it acquired during training. Both approaches have exploration functions. Policy gradients typically steer clear of the overestimation issues that value functions give a value-based Q-Learning model (Beysolow II, 2019, p. 20), but they require more samples to converge on an optimal path. Finally, a value-based model spends its time optimizing a value function rather than the parameters of the policy itself.</p>

<p>Actor-Critic (A2C) and solely value- or policy-based approaches differ primarily in how many models are active in the data collection process. In policy- or value-based approaches there is one model updating the weights for decision-making, but in an A2C model there are two active models, each with a different approach. The A2C model is efficient in its exploration because it has the value-based critic guiding the policy-driven actor, while a value-based or policy-based approach alone may suffer from over-exploitation or high variance. Since A2C models can explore more effectively, they typically converge on an optimal path faster than a value- or policy-based approach alone (Beysolow II, 2019, p. 38). Value- and policy-based approaches are opposite sides of the spectrum and, when placed together, complement each other’s shortcomings.</p>

<p><strong>References</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- Beysolow II, T. (2019). Applied Reinforcement Learning with Python. Apress.
- Yoon, C. (2019, February 6). Understanding Actor Critic Methods and A2C. Retrieved from Towards Data Science: https://web.archive.org/web/20200526014757/https://towardsdatascience.com/understanding-actor-critic-methods-931b97b6df3f?gi=a701bcc17ce
</code></pre></div></div>]]></content><author><name>Cade Bray</name><email>Bray.cade@gmail.com</email></author><summary type="html"><![CDATA[The cartpole problem is a classic reinforcement learning objective where the goal is to balance a pole on a cart that has two directional movements. These movements, combined with the velocity of the cart, change the angle of the pole in relation to the cart. The objective is simply to balance the pole on the cart given the two controls: moving the cart left and right. At each step of a given episode the state of the cart, the pole angle, and the cart velocity are collected and processed in some form to determine the highest possible reward or lowest penalty. The next action is calculated through various algorithms; two common ones for an actor are Reinforce and A2C.]]></summary></entry><entry><title type="html">Algorithm Ciphers</title><link href="https://cadebray.com/2024/08/04/algorithm-ciphers.html" rel="alternate" type="text/html" title="Algorithm Ciphers" /><published>2024-08-04T00:00:00+00:00</published><updated>2024-08-04T00:00:00+00:00</updated><id>https://cadebray.com/2024/08/04/algorithm-ciphers</id><content type="html" xml:base="https://cadebray.com/2024/08/04/algorithm-ciphers.html"><![CDATA[<p>Cryptography is changing rapidly, and many considerations need to be made when selecting the appropriate method of securing data. In this post we are working with securing data at rest rather than in transit; as such, we do not need to concern ourselves with TLS encryption suites here. <!--more--> The difficulty with encrypting data at rest versus data in transit is that we do not have an industry-standard method such as TLS/SSL (Oracle, n.d.). Data at rest has two categories of attack to consider when evaluating its well-being: digital attacks and physical attacks. 
Digital attacks could come from a bad actor intent on obtaining the data through means such as falsified or stolen credentials, copies of data moved to unsecure locations, or even ransomware attacks, where the attacker may not care about the data itself but holds it ransom from its intended users (Cloudflare, n.d.). Physical attacks could come from direct access to a hard drive, either through theft or by copying its contents.</p>

<p style="text-align: center;">
  <img src="/assets/cipher.png" alt="cipher" width="400" />
</p>

<p>Some best practices to consider are securing the data physically, such as with a locked server room and locked server cabinets, and drive encryption such as BitLocker by Microsoft. Drive encryption makes stolen hard drives useless because the data cannot be accessed without first decrypting it with a private key stored in the TPM module of the motherboard. BitLocker uses a widely accepted encryption method called the Rijndael algorithm, better known as the Advanced Encryption Standard (AES). AES can be used with three key sizes: 128 bits, 192 bits, and 256 bits (IBM, n.d.). BitLocker, one of many drive encryption services available, supports two of these three key sizes, 128 bits and 256 bits. By default, BitLocker uses 128 bits, but I would recommend the 256-bit key size for increased security. The benefit of using a smaller key size or a less optimal encryption method is the speed at which data can be encrypted and decrypted, but it could fall short of regulatory standards compared to the protection an AES 256-bit key provides.</p>

<p>Per the announcement of AES in Federal Information Processing Standards Publication 197, we can see that there are currently no weak or semi-weak keys identified (NIST, 2001). If the private key were to become known to an attacker, the drive could be decrypted. The private key in drive encryption is typically stored in the TPM module of the motherboard, which is tamper resistant (Microsoft, n.d.). The culmination of all these efforts, from AES 256-bit drive encryption to storing the private key in a tamper-resistant module provided by hardware manufacturers, allows for end-to-end protection from physical attacks on data at rest.</p>

<p>If an attacker could understand how a key is formulated, they could in theory recreate a private key to use for decryption. This is why generating random numbers is crucial for encryption. Many computers collect random information from various parts of the hardware, such as voltage and temperature, but the pool of such information sources is limited, so generating a new key can stall while the system waits to accumulate new random information (Oracle, n.d.). This entropy is in turn used by a cryptographic pseudorandom number generator (CPRNG) that captures a “seed” of random information and generates further data to the specified bit length. Platforms such as Java have built-in classes that provide access to CPRNG algorithms; in Java, this class is called ‘SecureRandom.’</p>
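<p>Java’s ‘SecureRandom’ is one such class; as an analogous illustration, Python’s standard-library ‘secrets’ module draws cryptographically strong random data from the operating system’s CPRNG:</p>

```python
import secrets

# 32 bytes = 256 bits of cryptographically strong randomness,
# suitable as key material for AES-256.
key = secrets.token_bytes(32)

# A hex-encoded form is often handier for storage or transport.
key_hex = secrets.token_hex(32)  # 64 hex characters
```

<p>Unlike the general-purpose ‘random’ module, these functions are designed so that past output gives an attacker no practical way to predict future output.</p>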

<p>The second form of attack, as mentioned before, is digital attacks. While this data is not in transit, and if it were it would be encrypted with various TLS cipher suites, it could be accessed by bad actors who gained privileges inside a network to connect to a storage device. Measures can be taken to prevent such attacks, such as only allowing internal or trusted devices to connect. Credentialing of users who should be able to connect remotely should be heavily monitored and restricted to users that have an absolute need for such access.</p>

<p>Government regulations could include the Gramm-Leach-Bliley Act, which requires financial institutions, such as Artemis Financial, to secure their data. To justify using AES 256-bit encryption we can turn our attention to how the United States federal government secures its classified data with AES 256-bit encryption (Wong, 2022). If the US government is leading the way with encryption and is using the same method recommended here, we can assume it would be sufficient for Gramm-Leach-Bliley Act regulatory standards.</p>

<p><strong>References</strong></p>

<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>- Oracle (n.d.). Java Security Standard Algorithm Names. https://docs.oracle.com/javase/9/docs/specs/security/standard-names.html#cipher-algorithm-names
- Manico, J., &amp; Detlefsen, A. (2014). Iron-Clad Java. McGraw Hill Computing. https://learning.oreilly.com/library/view/iron-clad-java/9780071835886/?sso_link=yes&amp;sso_link_from=SNHU
- Cloudflare (n.d.). What is data at rest? Retrieved August 4, 2024, from https://www.cloudflare.com/learning/security/glossary/data-at-rest/
- IBM (n.d.). Cryptographic algorithm and key length. IBM Documentation. Retrieved August 4, 2024, from https://www.ibm.com/docs/en/sgklm/4.1.1?topic=overview-cryptographic-algorithm-key-length
- Microsoft (n.d.). BitLocker FAQ. Windows Learn. Retrieved August 4, 2024, from https://learn.microsoft.com/en-us/windows/security/operating-system-security/data-protection/bitlocker/faq#what-form-of-encryption-does-bitlocker-use--is-it-configurable-
- National Institute of Standards and Technology (NIST) (2001). ADVANCED ENCRYPTION STANDARD (AES). Federal Information Processing Standards Publication 197. https://csrc.nist.gov/files/pubs/fips/197/final/docs/fips-197.pdf
- Microsoft (n.d.). What is TPM? Windows Support. Retrieved August 4, 2024, from https://support.microsoft.com/en-us/topic/what-is-tpm-705f241d-025d-4470-80c5-4feeb24fa1ee
- Wong, W. (2022, January 12). Federal government leads the way with encryption standards. Insights. Retrieved August 4, 2024, from https://insights.samsung.com/2022/01/12/federal-government-leads-the-way-with-encryption-standards/
</code></pre></div></div>]]></content><author><name>Cade Bray</name><email>Bray.cade@gmail.com</email></author><summary type="html"><![CDATA[Cryptography is changing rapidly, and many considerations need to be made when selecting the appropriate method of securing data. In this post we are working with securing data at rest rather than in transit; as such, we do not need to concern ourselves with TLS encryption suites here. The difficulty with encrypting data at rest versus data in transit is that we do not have an industry-standard method such as TLS/SSL (Oracle, n.d.). Data at rest has two categories of attack to consider when evaluating its well-being: digital attacks and physical attacks. Digital attacks could come from a bad actor intent on obtaining the data through means such as falsified or stolen credentials, copies of data moved to unsecure locations, or even ransomware attacks, where the attacker may not care about the data itself but holds it ransom from its intended users (Cloudflare, n.d.). Physical attacks could come from direct access to a hard drive, either through theft or by copying its contents.]]></summary></entry></feed>