Satellites, Partnerships and Technicalities of Nexus Explained by Its Founder
24th Apr 2018
This is the second part of our interview with Colin Cantrell (Nexus Founder) and Alex El-Nemer (Nexus Global Business Development) which took place in London on 23rd March 2018. This time, we focused on technical aspects, satellite launches, partnerships and more.
LL: From what we’ve read in your newsletter from March, you’re soon to launch the first satellite, and it’s going to be launched by Vector Space Systems. Is there anything on the Nexus side that you’ve got to take care of, for example paperwork, maybe the FCC license?
Colin: The FCC is the challenging part. Licensing is country-determined, so we really have to work with the ITU, the International Telecommunication Union. Right now we're focusing on UHF-VHF, which is an easier license to work toward. That definitely is a big hurdle we need to overcome.
Now, Vector launching Galactic Sky is also going to help us get access to it. We're launching our own satellites to create a parallel, new model. This will allow people to understand how to do it themselves: give them accessibility to it with open-source satellite plans, so that anyone can basically put up their own satellite and create more of a distributed network.
This would result in standardization of purpose, so that there can be interoperability between, say, our satellites, Galactic Sky's, and somebody else's, which I think will form some of the basis of the distributed satellite grid. Ideally, it can be owned by multiple people, and no one party necessarily becomes more significant than the others.
LL: You’ve mentioned focusing on UHF-VHF. Could you tell us some more on that topic?
Colin: That's the band we're using. It basically requires us to get specific radio licenses to start off experimentally. This is the easiest way. Buying a specific band is very challenging, especially since you have OneWeb and these other players, and in the space race everyone is really trying to get a foot in the door.
To be above this guy or that guy, you have to buy international spectrum, because some people launch satellites and then find themselves in trouble because they didn't get the proper licensing and frequencies. So, that's something Vector's also doing with Galactic Sky, which is going to give us earlier access. We're starting with some basic models to figure it out, start doing relays between certain mesh networks to prove the technology works, and then build that model out.
LL: Regarding Vector Space Systems and your father: how significant is the role they're playing in the development of Nexus (NXS), in the context of launching Galactic Sky etc.? How big is it, or how important are they to you?
Colin: Vector is an important piece because they’re allowing us access to the satellite space systems. They give us a foot in the door in that respect. They also provide use cases for viable applications of the technology. In exchange, we provide them with use cases for their Galactic Sky platform, beta testing, and utilization of certain Nexus features to function on the Galactic Sky platform.
I'd say it creates reputability both ways. It also helps show how these things can be used and helps build the initial use cases – to make this more commodity-based and less speculative, actually getting down to practical applications and grounding it.
LL: Since we're talking about satellites: you've said that the mission of Nexus is decentralizing the decentralization, and a big part of that is launching satellites into low Earth orbit. But once they're up there, they will in some way be controlled by you guys. Some might say that is a single point of failure, and that it's not really decentralized when it's basically controlled by one entity, which in this case is Nexus. How would you address that matter?
Colin: Okay, so that goes precisely back to the previous point I was making. Galactic Sky is something that's going to function on its own. The Nexus satellites that we put up are going to function on their own. Obviously, having operations command – pointing the satellite, getting your data as well as the telemetry – is something you have to handle through a central system.
But the idea is to create a model for people to replicate: put it up and make it accessible for other people to deploy their own. Hopefully, more and more companies will have that interoperability between their satellites, so they can all form that grid together and there is no singular point. If we were focusing on a singular point, it would be easy just to use Galactic Sky. We wouldn't have a satellite engineer designing it, we wouldn't be spending money, we wouldn't be trying to figure out some of these problems.
However, the true issue is answering that specific question. That’s what our first satellite is to help us discover. And not to say that we have it all figured out – but I think the first step is to get it up there, to figure out how it works and make sure we have the tracking.
Then figure out how to distribute your tracking, distribute your operations, create incentive systems for other people to deploy their own, and support interoperability between them and Galactic Sky. That way you can create a more collaborative satellite mesh, where everybody shares with one another and you don't have one specific company controlling it.
LL: So you expect other companies to launch satellites with your…?
Colin: That’s where the economic incentive model has a role. Designing that economic incentive model into the satellites gives people a reason to do it. There’s a natural incentive model to it, because you have the space race. It’s almost like computers in 1970. I believe that by combining a lot of these things together we’re going to have a formula for something significant.
Access for other people – that's one of the ideas. There are five billion people in the world who don't have access to reliable internet. Also, there are companies like OneWeb and Elon Musk's SpaceX. Things are starting to take off.
But the thing is, what's going to drive people to use it? Tying it in with the cryptocurrency and the distributed network – even allowing people to use it for financial reasons to start with. It opens up a lot of new doors.
LL: So about the incentive – are you still working on it?
Colin: We've got some ideas with Vector Space Systems as well. There's a model for how to do it: you have the service providers – let's say Google or Netflix or Facebook – absorb the cost of the access; the people then access the internet to use those services, and the providers monetize those people through ads.
It would be something like: “I host my website on yours, you give me this redundancy, I provide this service to people, people use it, connect to it, I make money from those people – that’s how I pay you back.”
LL: Can you give our readers a picture of this satellite? You're talking about Elon Musk, and many of our readers would have large-scale rockets and satellites that take whole teams to operate in mind. You're going to launch small cubes, correct?
Colin: Correct, the measurements are 10cm x 10cm x 10cm. So, they're a little like Commodore 64s – they're not some space-age stuff. Our biggest problem is power constraints at that size, but the technology will improve. So that's what it's going to look like – that's our 1U.
The next one we're doing is a 2U, then the one after that will probably be a 3U, with each U being about 10cm x 10cm x 10cm and weighing about 1kg. Also, Vector's Galactic Sky satellite is about 6U, so you'll see it's got 3U and then 3U again.
LL: What about the cost of launching these “babies” into space?
Colin: The cost figures, depending on the CubeSat, are approximately $50-100K for the satellite itself. The most expensive part is actually the connection to operations – interestingly enough.
See, when building a satellite you want to do as much on the ground as possible. You have to have telemetry systems. You need that satellite to point to the ground. Otherwise, it won’t sync with the ground station.
Developing the software and model that can handle all that data is the most expensive part for now, although it gets cheaper and cheaper as we progress. Getting the initial development phase done, going back to the distributed model, building that software, open-sourcing it – all of the above enables other people to reduce that cost, so it becomes more affordable in the future.
LL: There’s also the part about providing free internet. Big providers run a profitable business and would probably not like the idea of a competitor offering a free service. How would you address the issue?
Colin: There are a lot of other industries that are disrupted too. The best way that I look at it is to make sure that we do everything by the book, don’t break any laws, be completely clean and transparent. If they ever approach us or they try anything, we say: “Well, this is a new type of emerging industry – do you want to be a part of it or not?”
This will give them certain incentives to get involved. The truth is, they have the worst customer service known to man. Why? Because they can! Because there's nothing there to challenge them.
Some people have said, “aren’t you worried about dying, being killed for what you do?”
I know what we're up against. The best way that I've found to counter it is just being by the book, being transparent, and being open to helping them figure out how to adapt. By doing so, you help everybody improve what they're doing and sort of decentralise the power, which I think is healthy for everybody. Look at what happened with Bell Telephone in the States.
They had to break it up into a bunch of different companies, and even still AT&T controls most of that. To give you a cost figure too: it costs $300 million to run one cable from Europe to the States. $300 million. Just imagine how many satellites you could put up for that cost!
LL: Let's get back on the ground for a moment to talk about the Lower Level Database. It's a big thing, and it's more or less the foundation of Nexus. It's exciting, especially given the test results you've recently posted – it's outdone Google's and Oracle's products. How are you able to be orders of magnitude better than, for example, Berkeley DB? How do you, in such a short period of time, develop something that's so much better than the competition?
Colin: One of its powers is simplicity. A lot of things are over-engineered. One of the ideas behind what I call the Lower Level Protocol, the Lower Level Database, and the Lower Level Crypto is that they're as close to the hardware as possible, without the software bottlenecks. A lot of the database systems we have nowadays have been built on and built on and built on. They've become very "heavy". We're even seeing that with Ethereum.
I've isolated the simple, necessary functionalities of it. I've also made the indexing system modular. The reason it runs so fast is that it's got a different way to sort keys on the keychain: one version is a hashmap and one version is a memory map (and I'm developing it to be constant time). I see that as a necessary foundation element for cryptocurrencies; especially when you start to look at large amounts of transactions per second, you want to reduce as many bottlenecks as possible.
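To illustrate the keychain idea Colin describes, here is a minimal, hypothetical sketch in Python – not the actual LLD code, just the general shape of a hashmap-style index where a key lookup touches a fixed bucket regardless of how many keys the store holds:

```python
# Hypothetical sketch of a hashmap-style keychain index (not actual LLD code).
# Each key hashes to one of a fixed number of buckets, so a lookup inspects
# only that bucket's short chain, keeping access time near-constant.

import hashlib

class KeychainIndex:
    def __init__(self, num_buckets=1 << 16):
        self.num_buckets = num_buckets
        # Each bucket holds (key, file_position) pairs.
        self.buckets = [[] for _ in range(num_buckets)]

    def _bucket(self, key: bytes) -> int:
        digest = hashlib.sha256(key).digest()
        return int.from_bytes(digest[:4], "little") % self.num_buckets

    def put(self, key: bytes, position: int) -> None:
        bucket = self.buckets[self._bucket(key)]
        for i, (k, _) in enumerate(bucket):
            if k == key:
                bucket[i] = (key, position)  # overwrite existing entry
                return
        bucket.append((key, position))

    def get(self, key: bytes):
        for k, pos in self.buckets[self._bucket(key)]:
            if k == key:
                return pos
        return None
```

With enough buckets the chains stay short, which is why this kind of layout keeps lookup times flat as the key count grows.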
LL: We've also read that you're working on ACID properties, which is another step for the database to become better and better. How much time do you estimate it will take to implement fully?
Colin: That's basically already functioning in it. Right now the only thing that's really missing is a transaction journal, which is there to recover from any sort of power loss. You have volatile memory and non-volatile memory.
Volatile memory is basically RAM; non-volatile is your hard disk. If you want to commit a certain amount of transactions, you have a certain series of events that happen, sometimes correlated to different pieces in the database. If you can map that series to disk before it happens, and have it located on even a different sector of your hard drive, you reduce the problems of corruption if it crashes – because if you're in the middle of a write, you can pull that journal back out.
Durability is the only part of ACID that's missing. It was designed that way just because it was necessary: a transaction allows you to change multiple pieces of data at the same time and make sure everything happens in sequence. If one part of it fails, you can go back and recover from it. It's constantly progressing.
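The journal-before-write idea Colin outlines can be sketched like this – a hypothetical toy store, not the LLD implementation, assuming a JSON data file and a line-per-write journal:

```python
# Hypothetical sketch of a write-ahead transaction journal (not the actual
# LLD implementation). Intended writes are recorded in a separate journal
# file before the data file is touched, so a crash mid-write can be
# recovered by replaying the journal on the next startup.

import json
import os

class JournaledStore:
    def __init__(self, data_path, journal_path):
        self.data_path = data_path
        self.journal_path = journal_path
        self.data = {}
        self._recover()

    def _recover(self):
        # Load committed data, then replay any journaled writes that may
        # not have reached the data file before a crash.
        if os.path.exists(self.data_path):
            with open(self.data_path) as f:
                self.data = json.load(f)
        if os.path.exists(self.journal_path):
            with open(self.journal_path) as f:
                for line in f:
                    entry = json.loads(line)
                    self.data[entry["key"]] = entry["value"]
            self._flush()

    def _flush(self):
        with open(self.data_path, "w") as f:
            json.dump(self.data, f)
        os.remove(self.journal_path)  # journal is no longer needed

    def commit(self, updates: dict):
        # 1) Journal every intended write first (durability).
        with open(self.journal_path, "w") as f:
            for key, value in updates.items():
                f.write(json.dumps({"key": key, "value": value}) + "\n")
            f.flush()
            os.fsync(f.fileno())
        # 2) Only then apply the writes to the data file.
        self.data.update(updates)
        self._flush()
```

If power is lost after step 1 but before step 2, the next `_recover` call replays the journal, which is exactly the recovery property the missing durability piece provides.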
I spent a bit more time on the Lower Level Database over the last month – that's why you saw some of these results. I saw the loading time come down from 20 minutes to 2 minutes, so I figured it was way better, but I didn't really do specific benchmarks against it.
Also, I noticed another correlation: Berkeley DB and LevelDB slow down as the database gets bigger. Eventually, Berkeley becomes unresponsive – it doesn't even function. LevelDB does the same thing: it's fairly quick when it's small, but as it gets bigger and bigger it just slows down more and more. The Lower Level Database is fairly consistent in its timing. It starts at, I think, 0.78 milliseconds and gets to 0.81 by 8-9 million keys. So it stays fairly constant, which is something that's necessary.
Something that I always focus on primarily with the algorithms is constant-time operations, wherever possible. That means an operation executes in the same amount of time – or the same amount of time on average – no matter the size of the data set. As blockchains get bigger and bigger, as we have distributed systems and we're running more of the world's systems on them, that's definitely something that should be done.
Doing it now saves you a headache later, because you lock yourself into an architecture if you build specifically on it. If you don't look at your foundation very, very carefully, you get stuck in there. Cisco did that – their code is like a blob; you can't really pull much of it out or interchange pieces. You want your code to be like a series of blocks, so you can pull one out and swap another one in. That's called modular programming; that's just good design. In a nutshell, it can constantly be changed very easily.
LL: Are you the only one who has this big picture, or are some of the newly employed developers helping you out with it, to take some of the burden off your shoulders?
Colin: Yeah, they're taking a lot of the burden off, and I think some of them have different perspectives. They're more engineering-focused, and I call myself an architect. I like to engineer too, though, to stay humble – because you can imagine all these things, but when you get down to the actual computation of it, you face the reality.
These guys are taking a lot off. I'm not going to say I have everything figured out. I mean, there are other people, like Dino Farinacci. He's a big-picture guy like me, and he's providing us with the internet-overlay type of architecture, with encrypted packets, etc. He sees up there with me, so he's actually contributing a lot. He's a network underlay programmer, so he's working at an even lower level than me: he's coding raw IP packets, whereas I'm at the application layer – but at the bottom of the application layer.
LL: So down to the primitive…
Colin: Down to the primitive with a higher level perspective and seeing forward in the future. Taking into account every little thing, every little byte, and making that work, and that’s how I’ve been training the engineers right now. I hope that more of them will get these higher level views and it’s not just contingent on me, continuing to make the whole project stronger.
LL: The technology you're developing is valuable, and it obviously takes a lot of time and energy to build. Yet it's publicly available, since you made it open source. What were the reasons behind that? Just giving it to the people?
Colin: Give it to the people. Also, others can see it and find things that were missed or could be improved – essentially, getting more collaborative development going on it. It's also the basis of the TAO; it's in the LLL/TAO repository. It's there to be improved upon, and it's there to be modular so that you can expand upon it.
Eventually, you may be able to combine a Lower Level Protocol and a Lower Level Database and drop that into a server cluster that currently runs MySQL, and scale it even more easily – because MySQL is something that doesn't scale very easily. You can get 2,500+ operations per second if you use this kind of caching. That's why it's called the Lower Level Library – it's a template library.
It's a series of base templates: you encapsulate those templates with parent classes and child classes, and you can basically create any type of protocol from them. You can create any type of messaging system, you can create a database – any type of interpretation.
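The template-library idea can be illustrated roughly as follows – a hypothetical Python analogue (the actual LLL is C++), where one generic base class carries the shared machinery and thin subclasses specialize it into a protocol handler, a key/value store, or anything else:

```python
# Hypothetical illustration of the "base template" idea (not LLL code):
# a generic skeleton class plus small specializations built on top of it.

class BaseTemplate:
    """Generic message-processing skeleton shared by all specializations."""

    def process(self, message):
        decoded = self.decode(message)
        return self.handle(decoded)

    def decode(self, message):
        # Overridable hook: default is pass-through.
        return message

    def handle(self, decoded):
        # Overridable hook: subclasses must supply behavior.
        raise NotImplementedError

class EchoProtocol(BaseTemplate):
    """A trivial messaging protocol: reply with what was received."""
    def handle(self, decoded):
        return decoded

class KeyValueStore(BaseTemplate):
    """The same skeleton specialized into a tiny database."""
    def __init__(self):
        self.store = {}

    def decode(self, message):
        return message.split("=", 1)  # "key=value" -> [key, value]

    def handle(self, decoded):
        key, value = decoded
        self.store[key] = value
        return self.store
```

Both specializations reuse the same `process` pipeline; only the hooks differ, which is the modularity Colin is describing.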
LL: It's obviously going to be very quick – or I should say, it is very quick. Speed is a very important topic in the crypto space. Ripple is a very fast cryptocurrency, but its transactions-per-second number is still well below VISA's. Where does that put Nexus? Do you think it could someday be used for making small purchases without having to wait too long for confirmation, as is the case with Nano?
Colin: That's definitely the plan. Right now it's 75-100 transactions per second at our current capacity, still utilizing the linear blockchain. With the multi-dimensional blockchain, the idea is that it's linearly scalable. This means that as you increase your number of nodes, you expand the network out so that capacity can increase with more hardware.
Right now, the software is the biggest bottleneck, since everybody has to run the same software and you have O(n²) routing, which basically means the routing cost grows quadratically with the number of nodes. This is something the IETF guys talk about. They're not too fond of blockchain, which wanders into a very controversial area with all these old-school engineers – "that's never going to work."
That’s something Dino and I are focusing on. We’ve got to make it cluster, and we’ve got to have it partitioned, essentially broken up by a parallel processor.
Computers right now don't run one single process – that doesn't scale. Network routers don't route one packet at a time – that doesn't scale. The idea with this architecture is that as you increase your node count, capacity should increase, which means you just have to add more hardware.
Regarding the claims on the number of transactions a second Nexus will do… No idea yet.
We'll find that out. I don't make claims until we've actually got proven results. That's why I never made claims that the LLD does this many [transactions] per second. Here are the test results, here's the code – you can run that code directly from GitHub to verify the results yourself.
And that’s the idea, not saying we have everything figured out, but here’s the effort we’re putting forward to think differently and engineer new types of technology and make something that works.
LL: Back to VISA. Are you planning to be better than them when it comes to the transaction speed?
Colin: Yeah, I mean to have the capacity of them at least is definitely something that needs to be there, I think VISA’s capacity is maybe 5,000…
Colin: 24,000 at peak; the average is about 2,000-5,000. The biggest problem, though, is your data requirements. When you get that many transactions per second, you can end up with gigabytes per day, or per hour. That's where you have the linear scalability issue: how do you partition this data out so that not every node has to have all the data of every other node?
You end up with hundreds of gigabytes just for Bitcoin, and then you have to download this entire history. How do we create a trust model that incentivizes nodes not to manipulate the data, but also has certain levels of locks that are easily verifiable if somebody gives bad data?
LL: Nano's (previously RaiBlocks) transaction speed is close to instant. Is there any way you could implement their solutions in Nexus?
Colin: In their solution everybody has their own blockchain, and the ledger is a series of those. You could consider signature chains like that, but with re-keying every single time. They don't necessarily have as much of a global stake. They have a voting system that goes through multiple layers of voting if there's ever a transaction conflict.
Then the network can decide which transaction is more valid. I usually say Nano is equivalent to L1 locks on Nexus. L2 and L3 locks are going to give us a greater degree of security while still maintaining a global stake. There could be possible security ramifications there.
I don't necessarily see it as a global blockchain that's regulated by incredibly secure means, with the owners holding a lot of the stake. We're not going to run that type of system, so we can still maintain the value of ours. It seems more protocol-based – same with Hashgraph; they're going to have some challenges there as well.
I see this balance between hardcore security but very slow – aka Proof-of-Work Bitcoin – and super-super-fast but making security sacrifices. We're working on landing somewhere in the middle, because that gives you usability while also maintaining integrity in the system. The levels of locks are meant to be checks and balances on one another that require different resource inputs. That's why we're doing it that way: you don't isolate something completely here or completely there; you get the best of both worlds in that balanced position.
LL: Have you considered introducing two types of transactions? Dash has a solution where you can make a normal and an instant transaction. The instant transaction has a bigger cost to it. Did you have an idea for introducing different types of transaction and thereby transaction fees?
Colin: Instead of being a different transaction type, InstantX with Dash is protocol-level locking. This basically means: I send a transaction out, the masternodes verify it and take the input, which is a hash. Then they lock that to the transaction ID, which means that if somebody tries to spend that same input again, that transaction ID is going to be different.
You're basically going to have a lock conflict, and that invalidates the InstantX transaction and puts it to a miner to verify the correct transaction. What I mean by "a protocol" is that it's just done over the network: nodes are sending packets to other nodes, talking to them, sending data to them, saying, "this is what I've witnessed."
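The input-locking mechanism Colin describes can be sketched in a few lines – a hypothetical illustration of the idea, not actual Dash code: each spent input hash is locked to one transaction ID, and a second transaction trying to spend the same input surfaces as a conflict instead of being accepted.

```python
# Hypothetical sketch of InstantX-style input locking (not actual Dash code).
# The first transaction to announce a spend locks the input hash; any later
# transaction spending the same input produces a lock conflict.

class InputLockTable:
    def __init__(self):
        self.locks = {}  # input hash -> transaction ID that locked it

    def try_lock(self, input_hash: str, txid: str) -> bool:
        existing = self.locks.get(input_hash)
        if existing is None:
            self.locks[input_hash] = txid
            return True   # input is now locked to this transaction
        if existing == txid:
            return True   # same transaction re-announced; no conflict
        return False      # lock conflict: attempted double spend
```

A `False` result is the point at which, in Colin's description, the conflicting transactions would be handed off to a miner to decide which one is valid.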
We’re doing it the following way: with the L1, L2 and L3 locks you’ll be able to choose your level of security. Just like you can choose the number of confirmations. An L1 lock has been designed to be something very close to instant which is: protocol message is sent out, it’s organized, it’s set, it’s witnessed, it’s put into a bucket.
Then that's transmitted to the stakeless L2 locks. These give it more security because they're linking it horizontally. That way you create parallel processing channels – state channels. They can increase in size; they can instantiate more channels when there's more demand. These channels can still be linked together and verified by a secondary layer, and then the third, obviously, is the miner, which verifies it more deeply.
If I was buying coffee, the merchant would most likely assume a slight risk for the sake of better customer service. I may be able to attack them, or create a type of double-spend scenario that could cost them a slight amount of money. Then again, they've already had that happen to them with chargebacks: if there's ever a chargeback, the merchant has already assumed that risk, so it's something similar.
If I want it faster, I assume more risk. If I'm willing to wait longer, then I get higher assurance that the transaction is completely confirmed and fully on-chain. I'd just say: L1 locks, you buy some bubble gum or coffee; L2 locks, you're selling a car; L3, you're buying a house or sending $1 million. So the idea with L1, L2, and L3 is that time is your cost.
LL: Your FAQ currently mentions that the transaction fee for Nexus is 0.01 NXS, and that one day it will be free when inflation kicks in. If Nexus goes up to maybe $10 or $100, would it still be 0.01 NXS, or is it planned to get lower?
Colin: So the TAO – Tritium, Amine, Obsidian – is a series of three major global consensus upgrades. I'm not going to say "hard fork" because people completely misassociate the word.
The idea is to slowly reduce those fees and eventually make the fee more IOTA-style: a slight proof of work that throttles your transactions.
The transaction fee really serves two purposes right now. One is to pay miners when the currency completely deflates – we have a slight inflation model, so we don't have to worry about that. The other is to prevent dust spam attacks – spamming the network – by making it cost a certain amount of money. Even that doesn't work too well, as we've seen with a lot of dust spam attacks on Bitcoin.
We're phasing out the transaction fees as we're phasing in the 3DC, and we're doing it very slowly and methodically. We want to make sure everything works as it should. So with the Tritium update the plan is to reduce the fee, because a fixed number for your transaction fee isn't going to work too well: as the currency gets more and more valuable, transactions cost more and more.
That's something of the predicament Ethereum is in. You have a gas price that keeps increasing; as the value of ETH gets higher and higher, you get more and more problems.
LL: How did you come up with the maximum supply of Nexus?
Colin: I wanted there to be enough, but not too much to make it worthless.
I figured 21 million was a little low. I didn’t necessarily want to have a skyrocketed valuation but have enough to have that psychological validation.
Obviously, the point was not to make it incredibly deflationary. The decision I made was to model it slightly off of gold, which was always mined, (it was always coming into existence). I’ve done some research (on gold) and based my number on that.
Mining Nexus also creates a completely new industry. The validators can help people in third-world countries: they're running validations on locks on their phones. That can help people build up their trust, get a higher stake rate, and earn money that way.
There are people mining – which generates slight inflation – who are earning money to eat, sometimes making even more than at their day jobs. I think that's a beautiful thing when we hear comments like that. Those are the types of impacts we strive for.
LL: In your newsletter, it says that the satellite you’re going to launch will be purchased using cryptocurrency. Will it be NXS?
Colin: Yes, we're planning a mix of NXS and BTC. It's their first time purchasing with crypto too, but we're helping them out. It's good that they're putting some skin in the game as well, so we're paying for the satellite with crypto.
LL: This next question actually comes from one of our readers (and it fits perfectly): with Qtum already launching a satellite into low Earth orbit, Elon Musk wanting to launch satellites to provide internet access, and Waltonchain working on a strategy to utilise a network of low Earth orbit satellites to complement their IoT and internet-anywhere strategy, how is Nexus going to differentiate itself and stay relevant? Is there an opportunity here for partnerships, with the different projects complementing each other?
Colin: We are taking a different approach than some of these other projects, and as I see it there is plenty of space in low Earth orbit. But on another note, we are always open to collaboration with other projects that align with the same vision as ours: creating an inclusive and open system.
LL: Can you elaborate more on the use cases for NXS?
Colin: Yep – so as not to reveal too much… We definitely have more use cases than just paying for rockets [laughter]. There's obviously the experimental phase of finding the most practical ways, but that's a first step. We're looking at some other really interesting uses, especially with the virtualization platform – it's very symbiotic with cryptocurrency.
LL: You’ve already mentioned cooperation with other companies, how about SingularityNET? What role are they playing?
Colin: It's dual-use. They're going to experiment with their AI systems and AI agents in the trust system, helping develop the trust system – having trust is a form of intelligence.
Another one is seeing how our contracting engine is coming together and how it can be deployed into SingularityNET – how it can help them scale past their current limitations.
The idea is, we're very aligned on our principles of what artificial intelligence can do. We're creating a collaborative economy too. Instead of saying "AIs are bad, they're going to take over the world," why don't we teach AIs to be good and teach ourselves to be good…
LL: AI is indeed a controversial topic.
Colin: Yeah, and making them safer is a big thing too. When you make AI safer by programming certain rules into the Nexus chain, you get a distributed computer instead of the central computer – the one where the robot's central computer starts killing everybody [laughter].
AI is a big potential future. Space is another huge one. Blockchain is another. By linking those three together, you can make a very powerful, synergistic combination.
We’re still exploring the uses and I don’t think we have all of them yet. They’re in the ramping up stages, we’re in the ramping up stages with developers.
We've got a communication channel going with developers on Telegram. It's yet to be completely determined – it's speculative right now exactly what we're going to be able to do. Over the course of this year, we're going to really start to see a lot of these applications put into practice.
LL: Are you considering potential collaborations with other projects in the crypto space or outside of it?
Colin: Oh yes, always. I think competition has its merit, but cooperation really has greater merit. I'll use Metcalfe's law, which states that the value of a communication system is proportional to the square of the number of its participants.
That value grows super-linearly, which is interesting. So, you have 4 people squared competing against 4 people squared – that's 16 against 16, which together is 32. But if you have 8 people together, you've got 64: it's twice as strong. Cooperating with people who just want to corrupt it is definitely something we need to use very careful discernment about.
But cooperating with the people that are aligned – presents plentiful opportunities.
I believe creating a collaborative environment in such a hyper competitive industry may do a lot of good. We’re always open to finding other people that know and have specific focus and finding ways of connecting them together, where a connection should lead to more things.
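The arithmetic behind the Metcalfe's-law example above is simple enough to check directly – value modeled as the square of participant count, so one merged network beats two separate networks of the same total size:

```python
# Metcalfe's-law arithmetic from the example: network value modeled as
# the square of the participant count.

def metcalfe_value(participants: int) -> int:
    return participants ** 2

# Two separate 4-person networks versus one combined 8-person network.
two_separate = metcalfe_value(4) + metcalfe_value(4)  # 16 + 16 = 32
one_combined = metcalfe_value(8)                      # 64

print(two_separate, one_combined)  # the combined network is twice as strong
```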
LL: Great! That would be all of the questions we’ve prepared for you. Thank you for your time guys, it was fantastic talking to you.
Colin and Alex: Thanks for having us guys, it was a real pleasure.
* * *
Have you enjoyed our interview with Colin Cantrell and Alex El-Nemer? Be sure to subscribe for more.