Is it really necessary, or are we just trying to hang on?

Global travel is picking back up in the United States, which remains the world’s largest national consulting market and accounts for about half of global consultancy demand. But progress has been much slower in other countries, particularly those with fewer people vaccinated against COVID-19. And with advancements in technology bringing our world into a more digital age, it is difficult to say that consultants will ever return to the frequency of pre-pandemic business travel. Not only are businesses seeing that their projects and priorities can be completed just as well remotely as in person, but they are also saving millions of dollars previously spent on business travel – and reducing carbon emissions in the process. Though many will argue that remote work and meetings can never be as effective as face-to-face interactions (particularly when cultivating client relationships), innovations in technology have challenged this assumption and leave us questioning just how necessary business travel is now or will be in the future.

According to a Bloomberg survey of 45 large companies across the U.S., Europe and Asia, 84% of respondents expressed their intention to spend less on business travel post-pandemic. The primary reasons for this drastic change were the “ease and efficiency of virtual software, cost savings and lower carbon emissions” that would result from reducing business travel. Global consulting firms have projected they will save almost $1 billion by reducing business travel or cutting it altogether. Some are developing new policies that require more justification for travel than was previously needed, particularly where large investments in digitalization have allowed operations to be maintained and even to grow in spite of the pandemic. Technology-focused consulting firm Capgemini, for example, is considering “a travel cap,” which would move the firm to a “zero-based budget approach” and require it to question the need and ROI of every single trip beginning as early as 2022. Though this is certainly a culture change for a lot of companies, these new practices would save firms like Capgemini the millions of dollars they previously spent on travel.

Many companies were exploring the benefits of allowing more remote work and flexible schedules even before the pandemic began. But with 60% of global firms having seen significant increases in productivity during the 2020 lockdown, while their employees were mostly working from home, there is even more incentive to consider changes that would keep employees happy and maintain the momentum needed to sustain this surge in productivity beyond 2021. Allowing employees to work remotely and on flexible schedules would also remove the geographical limitations typically found in candidate searches, increasing the opportunity to hire top talent. How necessary is it to send an employee to meet with a client whose business is in Dubai if you begin to hire people who live in or nearer to Dubai?

We are seeing the world come into a new era, and not everyone is ready for change. But some companies are finding that the growth of digitalization and the scale of change have been a bigger challenge for them than COVID-19 itself. The pandemic certainly accelerated things, but these pressures had been building on a global scale long before 2020.

Is this change in business travel here to stay? Or do you think we’ll see it pick up again? Share your thoughts in the comments!

by LaShaune R. Littlejohn of Phoenix Star Creative, LLC

Taking a Zero-Trust Approach to Cybersecurity

Cyber breaches have become increasingly devastating over the last few years, with damaging effects extending into the day-to-day operations of the federal government. As remote work has become a societal norm rather than a business perk, organizations have grown more and more vulnerable to cyberattacks. Federal cybersecurity personnel must continuously ascertain risk levels to ensure that users can be trusted. If a user’s risk level is not constantly checked, an attacker who has already gained access to the system can move laterally through an agency’s networks without being detected.

The Biden Administration is prioritizing this issue, implementing a first-ever policy dedicated to a major overhaul of cybersecurity processes across all federal agencies. The executive order, signed on May 12, 2021, provides guidance and timeframes for public and private organizations alike to implement important technology and process improvements. In September 2021, the Administration built on this effort with draft guidance instructing federal agencies to adopt the tried-and-true cybersecurity philosophy of “trust no one,” otherwise known as the zero-trust approach.

Zero trust treats every user, device and application as a potential threat, requiring repeated verification and limiting user access on an application-by-application basis. Rather than granting every user privileges across the network, it ties each verified login identity to only the specific applications that user needs for their work. This greatly reduces the chance of a cyberattack infiltrating an entire network through one compromised account. It is a powerful defense, but it requires intensive back-end work from a professional IT team. At the scale of the federal government, that could mean mapping access control for hundreds of thousands of applications to millions of users across the nation.
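
To illustrate the idea, here is a minimal sketch (in Python) of a per-application access check under a zero-trust model. The user names, the ACCESS_MAP structure and the verify_identity helper are hypothetical placeholders, not any agency’s actual implementation.

    # Hypothetical sketch of zero-trust, per-application access control.
    # Every request re-verifies identity and checks an explicit allow-list;
    # no identity is ever trusted network-wide.

    ACCESS_MAP = {
        # user id -> applications this identity is explicitly allowed to use
        "j.smith": {"payroll", "timekeeping"},
        "a.jones": {"case-management"},
    }

    def verify_identity(user_id: str, credentials: dict) -> bool:
        """Placeholder for MFA, certificate or risk-score verification."""
        return credentials.get("mfa_passed", False)

    def authorize(user_id: str, credentials: dict, application: str) -> bool:
        # 1. Re-verify the identity on every request; never trust a prior session.
        if not verify_identity(user_id, credentials):
            return False
        # 2. Grant access only if this identity is mapped to this specific application.
        return application in ACCESS_MAP.get(user_id, set())

    print(authorize("j.smith", {"mfa_passed": True}, "payroll"))          # True
    print(authorize("j.smith", {"mfa_passed": True}, "case-management"))  # False

Even in this toy form, a compromised “j.smith” credential exposes only the two applications that identity is mapped to, not the entire network.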

Given a deadline of September 30, 2024, federal agencies are being required to establish the following practices and protocols to improve their cybersecurity infrastructure:

  • Create an inventory of all user devices;
  • Encrypt networks;
  • Implement a single sign-on authentication protocol for secure logins;
  • Treat all applications as Internet-connected; and
  • Improve data monitoring across computer networks.

While it is encouraging to see the federal government making essential and impactful moves toward strengthening cybersecurity – especially in a post-COVID world where remote work is more widely embraced and has already pushed many private-sector businesses to become familiar with and implement zero-trust network access (ZTNA) technologies – the timeline demanded for adopting these protocols, given the limited resources available, is aggressive and out of reach for many agencies. An effective zero-trust strategy must cover every potential access point, from the endpoint all the way to the cloud.

It is vital that federal agency leaders understand exactly what implementing zero trust requires, with particular attention to the loopholes that could render their efforts ineffective and derail a successful government-wide transition.

by LaShaune R. Littlejohn of Phoenix Star Creative LLC

Using Smart IDs to Implement Zero-Trust

The Identity Theft Resource Center (ITRC) reported that the number of public data breaches from January through September 2021 exceeded the total for all of 2020. With ransomware attacks in 2021 already surpassing the combined totals for 2020 and 2019, and with phishing and ransomware the two most frequent types of cyberattack, the number of public data breaches is well on its way to exceeding 2017’s record for the most in a single year. Some government agencies, such as the Department of Defense (DoD), have combatted this through the implementation of ID smart cards known as Common Access Cards (CACs).

Introduced in the early 2000s, CACs are a form of multi-factor authentication. By requiring both something the user knows (a PIN) and something the user has (the card itself), CACs clear a major hurdle in the battle to protect government assets from cyber threats and have become instrumental in providing the identity verification a zero-trust framework depends on. Worldwide shipments of smart government ID cards are estimated to rise from 500.1 million in 2020 to 554 million in 2021. The more widely this approach is adopted across government agencies, the easier it will be for professional IT teams to drive the transition to zero-trust network access (ZTNA) technologies.
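
As a simple illustration, here is a hedged sketch of the two factors behind a CAC-style login: something the user has (a card the agency actually issued) and something the user knows (the PIN that unlocks it). Real CAC verification relies on PKI certificates stored on the card; the card IDs and PIN hashes below are made-up stand-ins.

    import hashlib
    import hmac

    # Illustrative two-factor check: an issued card (something you have)
    # plus the correct PIN (something you know).
    TRUSTED_CARD_IDS = {"card-0412"}  # cards the agency has issued (made up)
    PIN_HASHES = {"card-0412": hashlib.sha256(b"123456").hexdigest()}

    def verify_cac(card_id: str, pin: str) -> bool:
        # Factor 1: the presented card must be one the agency issued.
        if card_id not in TRUSTED_CARD_IDS:
            return False
        # Factor 2: the PIN must match, compared in constant time.
        presented = hashlib.sha256(pin.encode()).hexdigest()
        return hmac.compare_digest(presented, PIN_HASHES[card_id])

    print(verify_cac("card-0412", "123456"))  # True: both factors satisfied
    print(verify_cac("card-0412", "000000"))  # False: wrong PIN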

But factors that continue to hamper the production of CACs include the future course of the COVID-19 global pandemic, as well as the ongoing global chip shortage that has led to major price gouging and is likely to continue even after 2022. The federal government took steps to address this issue earlier this year, with the Senate passing the CHIPS for America Act in June 2021, which allocated $52 billion to incentivize U.S. manufacturers and suppliers to increase the domestic semiconductor production needed to create more smart ID cards across all industries, including the federal government. With the bill stalled in the House over the last six months, the Biden Administration is urging Congress to pass it before Christmas this year. Passing the bill would greatly strengthen the U.S. economy and national security and help ensure the U.S. remains the world leader in technology advancement.

by LaShaune R. Littlejohn of Phoenix Star Creative, LLC

Bringing More Reliable Security to Cloud-Based Networks

For an effective zero-trust architecture, agencies need all-inclusive solutions that address the loopholes in standard zero-trust methodologies. One way to achieve this is with a secure access service edge (SASE) platform. SASE is a cloud-delivered security model that applies security policies based on identity and context, constantly monitors and gauges risk, and extends protection to cloud-based apps. SASE guards access to an organization’s cloud network no matter where the requesting devices are located, and it encapsulates key cloud-based security technologies such as cloud access security broker (CASB) and zero-trust network access (ZTNA). Put simply, SASE allows users to connect to applications remotely regardless of where they are located, while keeping corporate security controls and policies in place.

Within the scope of SASE, CASB handles the crucial monitoring of cloud-based apps, risks, and unusual privilege changes. Cloud environments change in real time, so the monitoring must as well. ZTNA gives users uninterrupted, reliable connectivity to private applications without ever placing those users on the network or exposing the apps to the internet. Together, these technologies underpin an effective zero-trust strategy. It is predicted that by 2025, at least 60% of enterprises will have explicit strategies for SASE adoption.
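
To make the idea concrete, here is a minimal sketch of the kind of context-aware access decision a SASE/ZTNA platform re-evaluates on every request. The signal names, weights and threshold are hypothetical, not any vendor’s actual scoring model.

    from dataclasses import dataclass

    @dataclass
    class AccessRequest:
        identity_verified: bool   # e.g., MFA or smart-card login succeeded
        device_compliant: bool    # e.g., disk encrypted, OS patched
        geo_location: str         # coarse location signal
        requested_app: str

    ALLOWED_LOCATIONS = {"US", "CA"}  # illustrative policy only

    def risk_score(req: AccessRequest) -> int:
        score = 0
        if not req.identity_verified:
            score += 100
        if not req.device_compliant:
            score += 40
        if req.geo_location not in ALLOWED_LOCATIONS:
            score += 30
        return score

    def decide(req: AccessRequest, threshold: int = 50) -> str:
        # Access to the private app is brokered only while risk stays low;
        # the app itself is never exposed directly to the internet.
        return "allow" if risk_score(req) < threshold else "deny"

    print(decide(AccessRequest(True, True, "US", "hr-portal")))   # allow
    print(decide(AccessRequest(True, False, "RU", "hr-portal")))  # deny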

Stacking security applications on top of the platform will significantly extend an agency’s zero-trust strategy from endpoint to cloud. This includes solutions like antivirus and anti-malware programs that can detect viruses and malware as they are downloaded to a device, as well as techniques such as enterprise digital rights management (EDRM) and data loss prevention.

EDRM encrypts files so that access policies travel with the data itself. When agencies track the sensitive data being transferred, the SASE platform extends that visibility across all cloud apps and lets them set flexible guidelines for user access based on factors such as the user’s identity and location.
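
As a rough sketch of the EDRM idea, the example below (using the Python cryptography library) encrypts a file and attaches an access policy to it, so possession of the bytes alone is not enough to read the contents. The roles, regions and key handling are simplified placeholders; real EDRM products manage keys in a dedicated key service.

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()  # in practice, held by a key service or HSM
    fernet = Fernet(key)

    document = b"Quarterly audit findings (sensitive)"
    policy = {"allowed_roles": ["auditor"], "allowed_regions": ["US"]}

    protected = {
        "policy": policy,                        # policy travels with the file
        "ciphertext": fernet.encrypt(document),  # unreadable without the key
    }

    def open_document(protected: dict, role: str, region: str) -> bytes:
        # The key is applied only if the requesting user satisfies the policy.
        if role in protected["policy"]["allowed_roles"] and \
           region in protected["policy"]["allowed_regions"]:
            return fernet.decrypt(protected["ciphertext"])
        raise PermissionError("access policy not satisfied")

    print(open_document(protected, role="auditor", region="US"))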

The government should continue to push forward with the public cloud and use the Federal Risk and Authorization Management Program (FedRAMP) to verify that the applications running in its clouds meet a consistent set of security standards. Relying on third-party vendors, rather than spending significant time and money building its own IT systems and infrastructure, will help create a more cost-effective and reliable route to an effective zero-trust platform.

 by LaShaune R. Littlejohn of Phoenix Star Creative, LLC

Representatives from 30 countries and the European Union met for two days last week during a White House-convened summit, forming an international coalition committed to fighting the universal problem of ransomware. Though the specifics are still being ironed out, the big-picture goals for this initiative include improving law enforcement collaboration, reducing the illicit circulation of cryptocurrency, and building a more diplomatic conversation around these urgent issues. Ransomware has cost the world $20 billion this year alone – and the number keeps rising. The average ransom demand in 2021 has reached over $220,000 (43% higher than the average demanded in 2020), and the U.S. reports over $400 million paid in ransom demands globally so far this year.

Participating countries include the United States, Ukraine, Germany, Bulgaria, Canada, South Korea, India, Brazil, Nigeria, and the United Arab Emirates. It should come as no surprise that Russia, China, Iran, and North Korea were among the countries not invited to last week’s global virtual summit, as they are among those found to be harboring cybercriminals. Still, these countries will eventually need to be brought into the conversation if the effort is to make a significant dent in ransomware attacks.

Call me an optimist, but hopefully this initiative causes a ripple effect that encourages similar global alliances around other important issues facing this world. I am eager to learn more about the plan and strategy once they have been developed, and to see the change that something like this could bring. Beyond the potential reduction in delays and restrictions caused by sanctions and other red tape, having so many global leaders commit together to developing and executing a plan of this magnitude is historic in itself. With the U.S. facing a shortage of skilled cybersecurity professionals – and with current professionals leaving their jobs due to burnout, skills gaps and other frustrations – it is encouraging to see the U.S. and other world powers work together to address this escalating and shared threat.

by LaShaune R. Littlejohn of Phoenix Star Creative LLC

Today is Giving Tuesday, a day each year when millions of people all over the world donate to a variety of charitable causes. The event, held annually since 2012 on the Tuesday after Thanksgiving, is known worldwide as a “global generosity movement” that unleashes the power of people and organizations to support life-changing transformations through donations of kindness, time, funds, goods, and advocacy. Even in the midst of COVID, over $2.47 billion was raised on GivingTuesday 2020 – a significant increase from the $1.97 billion raised during GivingTuesday 2019. There’s good reason to believe that Giving Tuesday 2021 will be at least as successful as previous years, if not more so.

For those of us who will be among the millions of donors, volunteers and social media engagers looking for organizations to support, it is important that we do our due diligence and get to know the organizations we want to support. It is not enough to simply know their mission and location; we must also confirm that we are donating money, services and time to the actual organizations, not to cybercriminals pretending to represent them. We would like to think that cybercriminals have some sort of moral code preventing them from scamming certain businesses or groups of people (such as our most vulnerable populations). Sadly, this is not the case, and it is up to us to take the necessary precautions to keep ourselves and our assets protected against cybercrime.

The Federal Trade Commission offers helpful tips and information on the basic precautions to take when preparing to donate to or engage with charitable organizations online. These include researching the organization before you donate or commit to support it; being mindful of how you pay; keeping records of your donations; and listening to that little voice in your head when someone pressures you to donate in a way that doesn’t seem right.

by LaShaune R. Littlejohn of Phoenix Star Creative, LLC

We have seen a mass migration of data and applications from local datacenters to the public cloud. While the cloud is not perfect, it has a lot going for it. The positives include an elastic, resilient and fault-tolerant architecture. The cons are that it is highly configurable and requires the proper controls, monitoring and service architecture to make it secure.

So there are several questions that security leadership must confront, because ignoring the cloud is no longer feasible. Here are some questions to reflect on before engaging with infrastructure, application and DevOps teams.

  1. Do we have an overall strategy and approach to secure the cloud (SaaS, PaaS and IaaS)?
  2. Do we leverage multifactor authentication for all cloud services? Are governance gates in place to prevent shadow cloud?
  3. How do we inventory and track cloud assets? Who are the owners, and are we tagging resources?
  4. Are all cloud logs being monitored, and what anomalies are we alerting on?
  5. Are our cloud or multi-cloud configurations monitored for compliance and for over-exposed access or permissions?
  6. How are vulnerabilities discovered and managed in our cloud computing environment? Do we leverage golden images for virtual machines? Do we patch or replace vulnerable assets?
  7. Who manages the cloud accounts and subscriptions, and what are the default security policies and overall security governance for new cloud accounts?
  8. Do we have an encryption standard for the cloud? Where are keys stored (an HSM)? What aren’t we encrypting, and why?
  9. How are privileged accounts managed? Where are sensitive passwords stored, and how often are they rotated? How are runtime secrets secured? PIM vs. PAM?
  10. How is our cloud networking secured: zero trust, remote access, firewalls, internet egress, URL filtering and remote browser isolation?
  11. How are IAM resource policies provisioned, and how many resources contain star (*) permissions? (See the sketch after this list.)
  12. Do we have centralized WAF / DDoS / CDN / DNS services in place for cloud services?
  13. How do we secure APIs, and do we have an API gateway with proper security controls and governance?
  14. How are we controlling access to SaaS solutions, and are we monitoring for sensitive data loss or exfiltration? Think CASB.
  15. How is data in transit secured, and how are certificates and PKI managed for cloud endpoints?
  16. Does our threat intelligence cover the cloud? Are SecOps actively engaged to investigate cloud-based incidents and events?
  17. Do our GitHub-type accounts contain any sensitive information? How do we know?
  18. How are code vulnerabilities identified, risk-prioritized and remediated? Do we leverage static, dynamic or open-source code scanning, or a bug bounty program?
  19. What is the strategy for cloud security services such as big data analytics, Kubernetes, Docker, serverless, microservices and IoT, to name a few?
  20. Does our CI/CD pipeline have security gates? How can security shift left and prevent inferior code and configurations before deployment?
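
For question 11, the sketch below shows one way to flag customer-managed IAM policies whose default version allows wildcard (*) actions, using boto3 and assuming AWS credentials are already configured. It is illustrative, not a complete audit.

    import boto3  # pip install boto3; assumes AWS credentials are configured

    iam = boto3.client("iam")

    def wildcard_policies():
        """Return names of customer-managed policies that Allow "*" actions."""
        flagged = []
        paginator = iam.get_paginator("list_policies")
        for page in paginator.paginate(Scope="Local"):  # customer-managed only
            for policy in page["Policies"]:
                document = iam.get_policy_version(
                    PolicyArn=policy["Arn"],
                    VersionId=policy["DefaultVersionId"],
                )["PolicyVersion"]["Document"]
                statements = document.get("Statement", [])
                if isinstance(statements, dict):  # single-statement form
                    statements = [statements]
                for stmt in statements:
                    actions = stmt.get("Action", [])
                    if isinstance(actions, str):
                        actions = [actions]
                    if stmt.get("Effect") == "Allow" and "*" in actions:
                        flagged.append(policy["PolicyName"])
        return flagged

    if __name__ == "__main__":
        for name in wildcard_policies():
            print("Wildcard Allow action found in policy:", name)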

Part of the misconception surrounding cloud security has been the preoccupation with the underlying hardware, hypervisor and shared public hosting. This centers on the compliance reports and certifications generated by cloud vendors (AWS, Azure, Google), which essentially say they run a tight, compliant ship and that they do not, in fact, mix their clients’ peanut butter and chocolate. These attestations are critically important, but they should not be confused with securing the highly configurable cloud services themselves, or with understanding the shared responsibility model. We can stipulate that AWS, Azure and Google run a datacenter better than most companies and most governments. The vulnerable pieces of the puzzle are how data and services are configured and how anomalies are monitored. Errors or omissions of the smallest variety can expose sensitive data and lead to a breach.

Where should we start? There is only one place to start, and it’s with the security organizational design. Cloud security must become more decentralized and work closely with developers, platform engineers and architects. This has traditionally been the role of a cloud security architect, which is appropriate for smaller organizations or early adopters of the public cloud. Larger and more mature cloud consumers need security in the trenches, and this has brought about a relatively new function called DevSecOps. Ideally, the cloud, security and enterprise architects set the overall strategy and controls framework and evangelize the security design and requirements, while DevSecOps security engineers help DevOps teams work through the security design and best practices. The goal is to create the proper feedback loops so the security gates can be adjusted as new challenges to adoption emerge. Shifting left in this way lets security continually improve and continuously deliver from the start, rather than arriving as a painful, expensive afterthought. The investment has to start at the organizational level: bring security into the trenches, and security will continually improve.
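
As one concrete example of a security gate, the sketch below fails a pipeline stage when a scan report contains critical or high findings. The scan-report.json format and severity labels are invented for illustration; a real pipeline would parse its scanner’s actual output.

    import json
    import sys

    BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}

    def gate(report_path: str = "scan-report.json") -> int:
        """Return a non-zero exit code if any blocking finding is present."""
        with open(report_path) as fh:
            findings = json.load(fh).get("findings", [])
        blockers = [f for f in findings if f.get("severity") in BLOCKING_SEVERITIES]
        for finding in blockers:
            print("BLOCKED:", finding.get("id"), finding.get("severity"))
        return 1 if blockers else 0

    if __name__ == "__main__":
        # A non-zero exit fails the CI stage and stops the deployment.
        sys.exit(gate())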

By Mike Donovan

The biggest mistake people make when they look for a new opportunity is that they can’t articulate what they are looking for in a role.

We all have a specialty, even though we touch on many other areas. Too often, we apply to all the jobs that touch on a piece of what we do, but not our core skill.

Then you get on a call with a recruiter and you sound uncertain. The moment someone sounds uncertain, recruiters move on. They get so many resumes that they don’t have time to figure out your skill set for you. It is your job to know your worth and sell yourself! Confidence in your ability will always move you on to the next phone interview.

Advice: before you apply for a job, do some soul searching about what makes you you. What are you better at than anyone else? If you are looking to move into another area, volunteer for some of those projects in your current role, seek a mentor who can guide you through the change, and look for certifications within that specialty.

Don’t sell yourself short if you want to stand out.

You must know your worth.