The cryptocurrency market is currently staging a strong recovery after an extended downtrend, with most prices up by an average of 40% since the beginning of the year. With this resurgence, it is more important than ever to identify promising cryptocurrencies with the potential to double or even triple in value. In this article, we highlight the top 5 altcoins under $3 that we believe have the potential for explosive growth in the coming months.
How is the Crypto Market performing today?
Over the past month, the cryptocurrency market has experienced a significant surge, with prices up by more than 35%. This rise can largely be attributed to Bitcoin’s move back above the $20,000 mark, which acted as a catalyst for the entire market. Many investors have been flocking to cryptocurrencies as an alternative investment while global markets remain volatile and the US dollar weakens. In times of uncertainty, cryptocurrencies have become a popular option for investors seeking to diversify their portfolios and protect their wealth. As a result, demand for cryptocurrencies has increased, driving up prices and fueling the market’s upward trajectory.
Top 5 Altcoins under $3 to Keep on Your Radar
#5 Stellar (XLM)
XLM stands for Stellar Lumens, the native cryptocurrency of the Stellar blockchain network. The Stellar network was created in 2014 with the goal of enabling fast, secure, and low-cost cross-border transactions. Lumens are used to facilitate transactions and payments between different currencies, both traditional and digital, and also to pay transaction fees on the Stellar network, making XLM a utility token as well. Unlike some other cryptocurrencies, Stellar does not rely on proof-of-work mining; instead it uses a consensus algorithm called the Stellar Consensus Protocol (SCP). The total supply of XLM is capped at 50 billion following a 2019 supply burn, with a significant portion already in circulation.
#4 Shiba Inu (SHIB)
Shiba Inu Coin, often referred to as simply SHIB, is a cryptocurrency that was created in August 2020. It is an ERC-20 token that runs on the Ethereum blockchain, and was designed to be a decentralized alternative to existing cryptocurrencies like Bitcoin and Ethereum. Shiba Inu Coin is named after the Shiba Inu dog breed, which is also the mascot of the cryptocurrency.
Shiba Inu is also about to launch its own blockchain, and investors are watching closely to see what comes next for the project.
#3 Cardano (ADA)
Cardano is a blockchain platform and cryptocurrency that was created in 2017 by a team led by Charles Hoskinson, a co-founder of Ethereum. The platform was designed to address some of the limitations of existing blockchain platforms, such as scalability and interoperability, by using a unique proof-of-stake consensus algorithm called Ouroboros.
One of the reasons why Cardano has attracted attention from investors is its strong emphasis on academic research and peer-reviewed scientific principles, which sets it apart from many other cryptocurrencies. The Cardano team has partnered with several universities and research institutions to develop the platform and ensure that it is based on sound principles and best practices.
Cardano also offers several features that make it attractive as an investment, including its ability to facilitate fast and low-cost transactions, its ability to run smart contracts and decentralized applications, and its focus on sustainability and environmental friendliness.
#2 Ripple (XRP)
XRP is a cryptocurrency that was created by Ripple Labs in 2012, and is designed to be a fast and efficient way to facilitate cross-border payments and transfers. Unlike some other cryptocurrencies, XRP is not mined, but instead relies on a consensus algorithm called the Ripple Protocol Consensus Algorithm (RPCA) to validate transactions.
One of the reasons why XRP may be considered a good investment is its focus on solving a real-world problem in the financial industry: the slow and expensive process of cross-border payments. Ripple Labs has developed partnerships with several financial institutions around the world, and XRP is being used by some of these institutions as a means of facilitating faster and cheaper transactions.
Analysts think that even if Ripple loses its lawsuit against the SEC, the company will simply pay a hefty fine and continue to operate normally. XRP’s price might take a hit in the short term but is expected to recover over the long term.
#1 Polygon (MATIC)
MATIC is the native token of Polygon, a layer-2 scaling solution for the Ethereum blockchain. Polygon is designed to address some of the scalability and transaction-speed limitations of the Ethereum network by offering a faster and more efficient alternative for decentralized applications (dApps) and smart contracts. The Polygon network consists of several interconnected blockchains and allows for faster and cheaper transactions while maintaining compatibility with the Ethereum ecosystem.
One of the reasons why MATIC may be considered a good buy is its potential for adoption and growth. The Ethereum network has seen significant growth in recent years, with an increasing number of dApps and users. As the Ethereum network continues to expand, the demand for layer-2 scaling solutions like Polygon is likely to increase, potentially leading to a rise in MATIC’s value.
How Coinbase is using Relay and GraphQL to enable hypergrowth
By Chris Erickson and Terence Bezman
A little over a year ago, Coinbase completed the migration of our primary mobile application to React Native. During the migration, we realized that our existing approach to data (REST endpoints and a homebuilt REST data fetching library) was not going to keep up with the hypergrowth that we were experiencing as a company.
“Hypergrowth” is an overused buzzword, so let’s clarify what we mean in this context. In the 12 months after we migrated to the React Native app, our API traffic grew by 10x and we increased the number of supported assets by 5x. In the same timeframe, the number of monthly contributors on our core apps tripled to ~300. With these additions came a corresponding increase in new features and experiments, and we don’t see this growth slowing down any time soon (we’re looking to hire another 2,000 across Product, Engineering, and Design this year alone).
To manage this growth, we decided to migrate our applications to GraphQL and Relay. This shift has enabled us to holistically solve some of the biggest challenges that we were facing related to API evolution, nested pagination, and application architecture.
API evolution
GraphQL was initially proposed as an approach to help with API evolution and request aggregation.
Previously, in order to limit concurrent requests, we would create various endpoints to aggregate data for a particular view (e.g., the Dashboard). However, as features changed, these endpoints kept growing and fields that were no longer used could not safely be removed — as it was impossible to know if an old client was still using them.
The end state was an inefficient system, as a few anecdotes illustrate:
An existing web dashboard endpoint was repurposed for a new home screen. This endpoint was responsible for 14% of our total backend load. Unfortunately, the new dashboard was only using this endpoint for a single, boolean field.
Our user endpoint had become so bloated that it was a nearly 8MB response — but no client actually needed all of this data.
The mobile app had to make 25 parallel API calls on startup, but at the time React Native limited us to 4 parallel calls, causing an unavoidable request waterfall.
Each of these problems could be solved in isolation using various techniques (better processes, API versioning, etc.), but those techniques are challenging to implement while the company is growing at such a rapid rate.
Luckily, this is exactly what GraphQL was created for. With GraphQL, the client can make a single request, fetching only the data it needs for the view it is showing. (In fact, with Relay we can require they only request the data they need — more on that later.) This leads to faster requests, reduced network traffic, lower load on our backend services, and an overall faster application.
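As a sketch of what this looks like in practice, the repurposed-dashboard anecdote above becomes a query that asks for exactly the one field the view renders (the schema and field names here are hypothetical, not Coinbase’s actual API):

```graphql
# Hypothetical schema: the new home screen requests only the single
# boolean it displays, instead of the entire legacy dashboard payload.
query HomeScreen {
  viewer {
    hasCompletedOnboarding
  }
}
```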
Nested pagination
When Coinbase supported 5 assets, the application was able to make a couple of requests: one to get the assets (5), and another to get the wallet addresses (up to 10) for those assets, and stitch them together on the client. However, this model doesn’t work well when a dataset gets large enough to need pagination. Either you have an unacceptably large page size (which reduces your API performance), or you are left with cumbersome APIs and waterfalling requests.
If you’re not familiar, a waterfall in this context happens when the client has to first ask for a page of assets (give me the first 10 supported assets), and then has to ask for the wallets for those assets (give me wallets for ‘BTC’, ‘ETH’, ‘LTC’, ‘DOGE’, ‘SOL’, …). Because the second request is dependent on the first, it creates a request waterfall. When these dependent requests are made from the client, their combined latency can lead to terrible performance.
This is another problem that GraphQL solves: it allows related data to be nested in the request, moving this waterfall to the backend server that can combine these requests with much lower latency.
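The latency win from moving the dependency server-side can be sketched with a toy model; the round-trip figures below are assumptions for illustration, not measurements from this post:

```typescript
// Assumed latencies: client <-> API round trips are slow (mobile networks),
// while server <-> upstream round trips inside the data center are fast.
const CLIENT_RTT_MS = 100;
const SERVER_RTT_MS = 5;

// REST waterfall: fetch the page of assets, then fetch wallets for those
// assets. The second request depends on the first, so they run sequentially
// and the client pays two full round trips.
function restWaterfallLatency(): number {
  const fetchAssets = CLIENT_RTT_MS; // e.g. GET /assets?page=1
  const fetchWallets = CLIENT_RTT_MS; // e.g. GET /wallets?assets=BTC,ETH,...
  return fetchAssets + fetchWallets;
}

// GraphQL: one client round trip; the server resolves the nested wallets
// field itself over its much faster internal network.
function graphqlLatency(): number {
  return CLIENT_RTT_MS + SERVER_RTT_MS;
}

console.log(restWaterfallLatency(), graphqlLatency()); // 200 vs 105
```

Each additional level of nesting widens this gap, since every dependent client request adds another full round trip.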
Application architecture
We chose Relay as our GraphQL client library, and it has delivered a number of unexpected benefits. The migration has been challenging: evolving our code to follow idiomatic Relay practices has taken longer than expected. However, the benefits of Relay (colocation, decoupling, elimination of client waterfalls, performance, and malleability) have had a much more positive impact than we’d ever predicted.
Simply put, Relay is unique among GraphQL client libraries in how it allows an application to scale to more contributors while remaining malleable and performant.
These benefits stem from Relay’s pattern of using fragments to colocate data dependencies within the components that render the data. If a component needs data, it has to be passed via a special kind of prop. These props are opaque (the parent component only knows that it needs to pass a {ChildComponentName}Fragment without knowing what it contains), which limits inter-component coupling. The fragments also ensure that a component only reads fields that it explicitly asked for, decreasing coupling with the underlying data. This increases malleability, safety, and performance. The Relay Compiler in turn is able to aggregate fragments into a single query, which avoids both client waterfalls and requesting the same data multiple times.
That’s all pretty abstract, so consider a simple React component that fetches data from a REST API and renders a list (This is similar to what you’d build using React Query, SWR, or even Apollo):
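A minimal sketch of such a component follows; the hook, endpoint, and helper component names (useFetch, /api/assets, AssetHeader, and so on) are illustrative, not actual Coinbase code:

```tsx
function AssetList() {
  // Fetch-on-render: this request is invisible to the parent component,
  // so it cannot be preloaded via static analysis.
  const { data: assets } = useFetch("/api/assets");
  if (!assets) return <Spinner />;
  return (
    <>
      {assets.map((asset) => (
        <AssetListItem key={asset.id} asset={asset} />
      ))}
    </>
  );
}

function AssetListItem({ asset }) {
  return (
    <>
      {/* AssetHeader receives the entire asset object but uses one field. */}
      <AssetHeader asset={asset} />
      <AssetPriceAndBalance assetId={asset.id} />
    </>
  );
}

function AssetPriceAndBalance({ assetId }) {
  // A second fetch that can't start until the parent has fetched and
  // rendered the list items -> a client-side waterfall.
  const { data } = useFetch(`/api/assets/${assetId}/price-and-balance`);
  if (!data) return <Spinner />;
  return <Balance price={data.price} balance={data.balance} />;
}
```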
A few observations:
The AssetList component is going to cause a network request to occur, but this is opaque to the component that renders it. This makes it nearly impossible to pre-load this data using static analysis.
Likewise, AssetPriceAndBalance causes another network call, and also creates a waterfall: its request won’t start until the parent components have finished fetching their data and rendering the list items. (The React team refers to this pattern as “fetch-on-render.”)
AssetList and AssetListItem are tightly coupled — the AssetList must provide an asset object that contains all the fields required by the subtree. Also, AssetHeader requires an entire Asset to be passed in, while only using a single field.
Any time any data for a single asset changes, the entire list will be re-rendered.
While this is a trivial example, one can imagine how a few dozen components like this on a screen might interact to create a large number of data-fetching waterfalls as components load. Some approaches try to solve this by moving all of the data-fetching calls to the top of the component tree (e.g., associating them with the route). However, this process is manual and error-prone, with data dependencies that are duplicated and likely to drift out of sync. It also doesn’t solve the coupling and performance issues.
Relay solves these types of issues by design.
Let’s look at the same thing written with Relay:
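Here is a sketch of the Relay version; fragment and field names are illustrative:

```tsx
function AssetList({ queryRef }) {
  // The query was started before render (e.g., at the route level),
  // so there is no fetch-on-render waterfall.
  const data = usePreloadedQuery(
    graphql`
      query AssetListQuery {
        assets {
          id
          ...AssetListItemFragment
        }
      }
    `,
    queryRef,
  );
  return (
    <>
      {data.assets.map((asset) => (
        <AssetListItem key={asset.id} asset={asset} />
      ))}
    </>
  );
}

function AssetListItem({ asset }) {
  // The fragment colocates this component's data needs with its code.
  // The parent only passes an opaque fragment reference; it cannot read
  // (or depend on) the fields declared here.
  const data = useFragment(
    graphql`
      fragment AssetListItemFragment on Asset {
        name
        price
        balance
      }
    `,
    asset,
  );
  return <Row title={data.name} price={data.price} balance={data.balance} />;
}
```

The Relay Compiler rolls the AssetListItemFragment up into AssetListQuery, so the whole tree’s data arrives in a single request.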
How do our prior observations fare?
AssetList no longer has hidden data dependencies: it clearly exposes the fact that it requires data via its props.
Because the component is transparent about its need for data, all of the data requirements for a page can be grouped together and requested before rendering is ever started. This eliminates client waterfalls without engineers ever having to think about them.
Relay requires data to be passed through the tree as props, but in a way that does not create additional coupling, because the fields are only accessible to the child component. The AssetList knows that it needs to pass AssetListItem an AssetListItemFragmentRef, without knowing what that contains. (Compare this to route-based data loading, where data requirements are duplicated on the components and the route, and must be kept in sync.)
This makes our code more malleable and easy to evolve — a list item can be changed in isolation without touching any other part of the application. If it needs new fields, it adds them to its fragment. When it stops needing a field, it removes it without having to be concerned that it will break another part of the app. All of this is enforced via type checking and lint rules. This also solves the API evolution problem mentioned at the beginning of this post: clients stop requesting data when it is no longer used, and eventually the fields can be removed from the schema.
Because the data dependencies are locally declared, React and Relay are able to optimize rendering: if the price for an asset changes, ONLY the components that actually show that price will need to be re-rendered.
While on a trivial application these benefits might not be a huge deal, it is difficult to overstate their impact on a large codebase with hundreds of weekly contributors. Perhaps it is best captured by this phrase from the recent ReactConf Relay talk: Relay lets you, “think locally, and optimize globally.”
Where do we go from here?
Migrating our applications to GraphQL and Relay is just the beginning. We have a lot more work to do to continue to flesh out GraphQL at Coinbase. Here are a few things on the roadmap:
Incremental delivery
Coinbase’s GraphQL API depends on many upstream services — some of which are slower than others. By default, GraphQL won’t send its response until all of the data is ready, meaning a query will be as slow as the slowest upstream service. This can be detrimental to application performance: a low-priority UI element that has a slow backend can degrade the performance of an entire page.
To solve this, the GraphQL community has been standardizing on a new directive called @defer. This allows sections of a query to be marked as “low priority”. The GraphQL server will send down the first chunk as soon as all of the required data is ready, and will stream the deferred parts down as they are available.
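As a sketch (the fields here are hypothetical), a query using @defer might mark a slow, low-priority section like this:

```graphql
query AssetScreen($id: ID!) {
  asset(id: $id) {
    name
    price # required: sent in the first chunk as soon as it's ready
    ... @defer(label: "history") {
      # low priority: streamed down in a later chunk when available
      priceHistory {
        timestamp
        value
      }
    }
  }
}
```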
Live queries
Coinbase applications tend to have a lot of rapidly changing data (e.g. crypto prices and balances). Traditionally, we’ve used things like Pusher or other proprietary solutions to keep data up-to-date. With GraphQL, we can use Subscriptions for delivering live updates. However, we feel that Subscriptions are not an ideal tool for our needs, and plan to explore the use of Live Queries (more on this in a blog post down the road).
Edge caching
Coinbase is dedicated to increasing global economic freedom. To this end, we are working to make our products performant no matter where you live, including areas with slow data connections. To help make this a reality, we’d like to build and deploy a global, secure, reliable, and consistent edge caching layer to decrease total roundtrip time for all queries.
Collaboration with Relay
The Relay team has done a wonderful job and we’re incredibly grateful for the extra work they’ve done to let the world take advantage of their learnings at Meta. Going forward, we would like to turn this one-way relationship into a two-way relationship. Starting in Q2, Coinbase will be lending resources to help work on Relay OSS. We’re very excited to help push Relay forward!
Are you interested in solving big problems at an ever-growing scale? Come join us!