Roadmap to all my technologies
This FB post was written seven years ago, on December 21, 2012, and is now coming true.
"God gave me a vision. I walk into a room which has an interface; by using voice commands and hand/body movements, I can entirely control a computer so advanced that everything in the place responds to my commands. The computer is like a human being that responds like Apple Siri, but it is so complicated that it is linked to the outside world, where everything can also be controlled: the future of a car that can be controlled by the opening of a door, to a driverless car that can fly, and can navigate, maneuver with ease and dock; an aircraft powered by a reactor that can fly at many times the speed of a rocket engine without the need of an airspace, and with such great maneuverability that it can move at great speed in all directions in a twinkling of an eye. And I look at the time clock: it is at least 100 years into the future."
Voice recognition, natural language and gesture controls will come first; a brain interface with thought controls will be the last stage, where you can use your thoughts to control anything in your environment. How can you sense your environment? Embedded sensors, from GPS to computer vision, facial recognition, voice recognition, 3D AR vision with multiple cameras, radar and sonar, will create a "3D vision" of everything: from self-driving cars to flying cars with perfect controls in the air, to a ship's navigation system in the oceans above and below the water; even terrain above and below the ground can be mapped. With artificial intelligence and machine learning, precise and autonomous control is perfected, where all decisions on a fighter aircraft can be controlled and targeted using highly sophisticated algorithms with multiple control interfaces.

Design tools of the future, like AutoCAD and 3D modelling where Nvidia is now heading, are making AI and machine learning the present; we just have not yet integrated AR into the presentation. With big data, all kinds of datasets can be imported into the software, where you can toggle any view you want, even creating hundreds of views from 3D to charting.

Last but not least, I will merge all my technologies to create the Next Gen neuromorphic super AI computer that will put HFT to shame, linking the entire world's exchanges together with Next Gen data transmission that is 1,000 times the present technology. I have given everyone 5.5G, but 6G will be totally different as I merge it with blockchain, which means that I have to merge all technologies and create the Next Gen Internet. You can train your AI and machine learning to solve any problem with great accuracy, like what Boston Dynamics has done with their robots. PERFECTION is an ART.
I will create the Next Gen Internet by merging all these technologies to create the BLUEPRINT.
The FUTURE is RISC-V: Pine64 clusters on Next Gen Blockchain, software acceleration via distributed nodes on the Edge, using Ubuntu Linux features and software packages to manage APIs and algorithms for an Intelligent OS (Version 1.0).
On today's TCP/IP networks you can still use the client/server model, but with the new-gen Blockchain everything will be customised for a distributed node: a network where software acceleration will merge the CPU, storage, a secured database and I/O speeds beyond 5G, going beyond HPC with millions of threads for the merger of AI and ML. Pine64 clusters will be utilized in every distributed node for both proprietary and open-source OSes, the same as an open or private network on Blockchain.

A control stack will sit on top of every node, where AI, ML and algorithms will automate every process, load balancing, tasks and I/O, like a Unix system, and control the software stacks for every API, creating the intelligence for an Intelligent OS. So technically there are no restrictions on the OS you use, as long as you install the Controls (which are dedicated to different OSes) and a new 3D browser interface, which also controls the packages or apps on your software stack depending on your OS. This will be unhackable, with a secured database and a new transport protocol built into the browser.

The next-gen storage will be a 3D storage that expands to 8D, with a new material from nanotechnology that lets you create limitless amounts of storage, millions of terabytes beyond SSD. The architecture will change drastically, without the use or need of temporary memory like DRAM: a cache on the storage itself will create massive I/O speeds and unlimited threads.
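As a sketch of the control-stack idea above, here is a hypothetical scheduler (every name in it is illustrative, not a real product API) that places each task on the least-loaded node, the way the text describes AI automating load balancing without an administrator:

```python
class Node:
    """A distributed node with its own CPU capacity and current load."""
    def __init__(self, name, cpu_capacity):
        self.name = name
        self.cpu_capacity = cpu_capacity
        self.load = 0.0

class ControlStack:
    """Hypothetical control stack sitting on top of the nodes.

    It automates what the text says needs no administrator:
    each incoming task goes to the node with the most spare
    relative capacity (a simple least-loaded policy).
    """
    def __init__(self, nodes):
        self.nodes = nodes

    def schedule(self, task_cost):
        # Pick the node with the lowest load relative to its capacity.
        node = min(self.nodes, key=lambda n: n.load / n.cpu_capacity)
        node.load += task_cost
        return node.name

stack = ControlStack([Node("pine64-a", 4), Node("pine64-b", 8)])
placements = [stack.schedule(1.0) for _ in range(3)]
print(placements)  # ['pine64-a', 'pine64-b', 'pine64-b']
```

A real control stack would of course also manage threads, memory and I/O, but the same "measure, decide, place" loop is the core of any such automation.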
I will start and offer the roadmap for this incredibly difficult project, with a proof of concept and white paper, before the end of 2020. I do not believe in wasting resources like the fragmentation of Bitcoin, and I want to get it right so it will last for the next 100 years, until someone greater than me comes along. In order to minimise the risk over the roughly ten-year term until 2030, I will take a term life insurance policy starting from S$1 million up to S$100 million, which will lapse on the completion of my project. Normal shareholders will have voting rights, but I intend to hold at least 51% of this company and only sit on the Board. I am so confident of achieving all my goals that I will have no lack of investors, as I have already proved that I am an expert in technology, the economy, investments and predicting the future; upon completion of this project I can even fund the IMF/World Bank and set up the Economy of Abundance. By that time I will turn my non-profit into a Foundation and create assets and income so that it has the means of lasting forever, once global poverty is solved.
Breaking News: The Next Gen Internet 5.5G is already here and 6G is on its way, using Sound over Multicore Fibre travelling beyond the speed of light.
That means I can use this technology to piggyback on existing fibre connections to create speeds and capacity 100 to 1,000 to 10,000 times the present technology. I can modify existing 5G networks into an OPEN or CLOSED network, where police, security and medical personnel use the OPEN network and the rest of the world uses the CLOSED network; even 5G wireless routers can support this feature. Since using sound to piggyback on existing fibre can increase your capacity up to 10,000x and 100,000x, all it takes is a slight modification, which can be tested and implemented in months, before we roll out our 5G networks in 2020. I can even use a VPN service to secure this OPEN network, a layer of security most countries do not implement in their 5G networks. You can use existing 4G fibre networks to upgrade to 5G; all you need to do is make sure your new 5G equipment conforms to the spectrum you are allocated, which can easily be customised to handle both OPEN and CLOSED, as it is unlikely to exceed your capacity, even for wireless routers. Contributed by Oogle.
Scientists have perfected a new technology that can transform a fibre optic cable into a highly sensitive microphone capable of detecting a single footstep from up to 40km away.
Guards at listening posts protecting remote sensitive sites from attackers such as terrorists or environmental saboteurs can eavesdrop across huge tracts of territory using the new system which has been created to beef up security around national borders, railway networks, airports and vital oil and gas pipelines.
Devised by QinetiQ, the privatised Defence Evaluation and Research Agency (DERA), the technology piggybacks on the existing fibre optic communication cable network, millions of miles of which have been laid across the world.
Trials have already been staged in Europe to use the OptaSense system, which evolved out of military sonar and submarine technology, on railways to prevent vandals or thieves trespassing on high-speed lines as well as to counter terrorism. It has been deployed by several blue chip oil companies to protect energy pipelines which run through some of the most lawless and remote regions of the world.
Oil and gas companies lose millions of pounds each year through “hot tapping” in which thieves siphon off oil to sell. The process can be dangerous, resulting in explosions which have claimed hundreds of lives as well as causing serious environmental damage. Its creators say the system can also safeguard against accidental damage caused by builders and farmers working close to pipelines in Europe and North America. But it is hoped the technology will be rolled out to enhance security arrangements at prestige sites, among them Heathrow’s Terminal 5 or the Olympic Games and to protect major gatherings of world leaders such as during the G8, which has become an increasing magnet for protest movements.
DARPA has a project that protects undersea oil pipelines with sound sensors that can listen for any meddling with its oil pipelines via fibre optics. This technology can be modified to piggyback on multi-core fibres, using Sound to travel beyond the Speed of Light, for the transmission of huge amounts of data beyond 255Tbps, and it will be the Next Generation Internet beyond 5G; the secret lies in the medium that facilitates this transmission.
How did the researchers at Eindhoven University of Technology (TU/e) and University of Central Florida (CREOL) do it? Multi-core fiber, of course! As it stands, the entire internet backbone consists of single-mode glass and plastic fiber. These fibers can only carry one mode of light — which, in essence, means they can only carry the light from a single laser. (It’s a bit more complex than that, but it’s beyond the scope of this story to explain it any further.) You can still use wavelength division multiplexing (WDM) to push insane amounts of data down a single fiber (a few terabits), but we will eventually run up against the laws of physics.
Multi-core fiber — literally a strand of optical fiber that has multiple cores running along it — allows for multi-mode operation. It has historically been hard (and costly) to make high-quality multi-mode fiber, but it seems those barriers are finally starting to fall. In this case, the TU/e and CREOL researchers used a glass fiber with seven individual cores, arranged in a hexagon. They used spatial multiplexing to hit 5.1 terabits per carrier, and then WDM to squeeze 50 carriers down the seven cores — for a total of 255Tbps. This wasn’t just a short-range laboratory demo, either: The multi-mode fiber link was one kilometer (0.62 miles) long. [Research paper: doi:10.1038/nphoton.2014.243]
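The headline figure in the quoted article is straightforward arithmetic: 50 WDM carriers at 5.1 Tbps each across the seven cores.

```python
# Figures quoted from the TU/e / CREOL experiment described above.
tbps_per_carrier = 5.1   # via spatial multiplexing over the 7 cores
carriers = 50            # WDM carriers squeezed down the fibre
total_tbps = tbps_per_carrier * carriers
print(round(total_tbps))  # 255
```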
Eventually, multi-mode fiber will most likely replace the internet’s current single-mode backbone — but considering such an upgrade would require millions of miles of new multi-core cabling, and lots of new routing hardware to handle the multi-mode connections, we’re talking very long-term here. Still, with internet traffic continuing to grow at an alarming rate — mostly fueled by the popularity of streaming video, and smartphones and tablets bringing billions more people online — it’s nice to know that we now have the necessary technology to make sure that we don’t run out of bandwidth any time soon.
Similarly, the spectrum can be used to piggyback the transmission of huge amounts of data, using Sound to travel beyond the Speed of Light, with technologies beyond TDD for Intelsat satellites that have not yet been invented.
Using improvised Storage Class Memory (SCM), a newer hybrid storage tier, you will not need any NAND memory on your systems in future; it forms part of a huge cache on your SSD.
Storage Class Memory (SCM) is a newer hybrid storage tier. It's not exactly memory, and it's also not exactly storage. It lives closer to the CPU and comes in two forms: 1) traditional DRAM backed by a large capacitor to preserve data to a local NAND chip (for example, NVDIMM-N) and 2) a complete NAND module (NVDIMM-F). In the first case, you retain DRAM speeds, but you don't get the capacity. Typically, a DRAM-based NVDIMM is behind the latest traditional DRAM sizes. Vendors such as Viking Technology and Netlist are the main producers of DRAM-based NVDIMM products.
The second, however, will give you the larger capacity sizes, but it's not nearly as fast as DRAM speeds. Here, you will find your standard NAND—the very same as found in modern Solid State Drives (SSDs) fixed onto your traditional DIMM modules.
This type of memory does not register as traditional memory to the CPU, and as of the DDR4 specification standard, modern motherboards and processors are able to use such technologies without any special microcode or firmware. When the operating system loads on a system containing such memory, it isolates it into a "protected" mode category (for example, 0xe820), and it won't make use of it like standard volatile DRAM. Instead, it will access said memory only via a driver interface. The Persistent Memory or pmem Linux module is that interface. Using this module, you can map memory regions of these SCM devices into userspace-accessible block devices.
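To make the pmem access pattern concrete, here is a minimal sketch of the idea: map a byte-addressable region into userspace and read and write it directly, as the pmem driver lets you do with a real SCM block device. An ordinary temp file stands in for a device node like `/dev/pmem0` (an assumption made so the sketch runs on any machine):

```python
import mmap
import os
import tempfile

# Assumption: a plain temp file stands in for a pmem block device
# such as /dev/pmem0, so this sketch runs without real SCM hardware.
path = os.path.join(tempfile.mkdtemp(), "fake_pmem")
with open(path, "wb") as f:
    f.write(b"\x00" * 4096)          # one page of "persistent" memory

# Map the region into userspace, then address it at byte granularity.
with open(path, "r+b") as f:
    region = mmap.mmap(f.fileno(), 4096)
    region[0:5] = b"hello"           # store directly into the mapping
    region.flush()                   # push changes back to the medium
    data = bytes(region[0:5])        # load it back
    region.close()

print(data)  # b'hello'
```

On real NVDIMM hardware the flush step maps to cache-line flush instructions rather than a page writeback, but the userspace programming model is the same.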
Current applications use SCM for in-memory databases, high performance computing (HPC) and artificial intelligence (AI) workloads, and also as a persistent cache, although it doesn't have to be limited to those things. As NVMeoF continues to mature, it'll allow for you to export SCM devices across a storage network.
I have done research on this and have identified that in the future you do not need temporary memory (such as DRAM) to sit beside your CPU and storage: using SCM as a huge cache, which is unlimited because it is backed by your storage, everything can reside on your SSD. The challenge is how to optimise this architecture so you are not limited by your temporary memory. Contributed by Oogle.
Beyond SSD: How A Discovery At MIT May Lead To The Next Generation Of Data Storage
Assistant Professor Beach and his multi-institute team have applied years of work to finally elucidating the process by which engineers can generate and control skyrmions. The superior attributes of these virtual particles could give rise to a virtual storage platform offering faster, denser forms of memory than conventional magnetised drives can deliver. This project, which was co-funded by the German Science Foundation and the U.S. Department of Energy, could go down as the foundation of next-generation data storage and RAM solutions.
I will make improvements in WiFi 6 spectrum for WiFi 7 using Intelligent AI Algorithms
According to IDC, 802.11ax (Wi-Fi 6) deployment is projected to ramp significantly in 2019 and become the dominant enterprise Wi-Fi standard by 2021. This is because Wi-Fi 6 will deliver faster network performance and connect more devices simultaneously. Additionally, it will transition Wi-Fi from a ‘best-effort’ endeavor to a deterministic wireless technology that is now the de-facto medium for internet connectivity.
With a four-fold capacity increase over its 802.11ac (Wi-Fi 5) predecessor, Wi-Fi 6 deployed in dense device environments will support higher service-level agreements (SLAs) to more concurrently connected users and devices with more diverse usage profiles. This is made possible by a range of technologies that optimize spectral efficiency, increase throughput and reduce power consumption. These include BSS Coloring, Target Wake Time (TWT), Orthogonal Frequency-Division Multiple Access (OFDMA), 1024-QAM and MU-MIMO.
In this article, we’ll be taking a closer look at BSS Coloring and how Wi-Fi 6 wireless access points (APs) can utilize this mechanism to maximize network performance by decreasing co-channel interference and optimizing spectral efficiency in congested venues. These include high-density environments such as stadiums, convention centers, transportation hubs, and auditoriums.
What is Basic Service Set Coloring?
Legacy high-density Wi-Fi deployments typically saw multiple access points assigned to the same transmission channels due to a limited amount of spectrum – an inefficient paradigm that contributed to network congestion and slowdowns. Moreover, legacy IEEE 802.11 devices were unable to effectively communicate and negotiate with each other to maximize channel resources. In contrast, Wi-Fi 6 access points are designed to optimize the efficient reuse of spectrum in dense deployment scenarios using a range of techniques, including BSS Coloring.
This mechanism intelligently ‘color-codes’ – or marks – shared frequencies with a number that is included within the PHY header that is passed between the device and the network. In real-world terms, these color codes allow access points to decide if the simultaneous use of spectrum is permissible because the channel is only busy and unavailable to use when the same color is detected. This helps mitigate overlapping Basic Service Sets (OBSS). In turn, this enables a network to more effectively – and concurrently – transmit data to multiple devices in congested areas. This is achieved by identifying OBSS, negotiating medium contention and determining the most appropriate interference management techniques. Coloring also allows Wi-Fi 6 access points to precisely adjust Clear Channel Assessment (CCA) parameters, including energy (adaptive power) and signal detection (sensitivity thresholds) levels.
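The color-based defer decision described above can be sketched as a toy function. The threshold value here is illustrative only, not taken from the 802.11ax specification:

```python
def channel_busy(my_color, detections, obss_threshold_dbm=-72):
    """Toy BSS Coloring defer decision.

    detections: list of (bss_color, rssi_dbm) frames heard on the
    channel. A same-color frame always marks the channel busy;
    a different-color (OBSS) frame only does so when it is loud
    enough to exceed the relaxed OBSS detection threshold.
    """
    for color, rssi in detections:
        if color == my_color:
            return True           # intra-BSS frame: always defer
        if rssi > obss_threshold_dbm:
            return True           # loud OBSS frame: defer anyway
    return False                  # faint OBSS frame: spatial reuse OK

print(channel_busy(3, [(5, -80)]))  # False: faint, different color
print(channel_busy(3, [(3, -80)]))  # True: same color, must defer
```

This is exactly the efficiency win the article describes: two differently-colored BSSs can transmit concurrently as long as they only hear each other faintly.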
Maximizing Performance in Congested Environments
Designed for high-density connectivity, Wi-Fi 6 offers up to a four-fold capacity increase over its Wi-Fi 5 predecessor. With Wi-Fi 6, multiple APs deployed in dense device environments can collectively deliver required quality-of-service (QoS) to more clients with more diverse usage profiles. This is made possible by a range of technologies – such as BSS Coloring – which maximizes network performance by working even within heavily congested, co-channel interference environments. From our perspective, BSS Coloring will play a critical role in helping Wi-Fi evolve into a collision-free, deterministic wireless technology as the IEEE looks to integrate future iterations of the mechanism into new wireless standards to support the future of Wi-Fi and beyond.
Interested in learning more about 802.11ax? Read the related articles below:
The Target Wake Time mechanism first appeared in the IEEE 802.11ah “Wi-Fi HaLow” standard.
Published in 2017, the low-power standard is specifically designed to support the large-scale deployment of IoT infrastructure – such as stations and sensors – that intelligently coordinate signal sharing. The TWT feature further evolved with the IEEE 802.11ax standard, as stations and sensors are now only required to wake and communicate with the specific Beacon(s) transmitting instructions for the TWT Broadcast sessions they belong to. This allows the wireless IEEE 802.11ax standard to optimize power saving for many devices, with more reliable, deterministic and LTE-like performance.
As Maddalena Nurchis and Boris Bellalta of the Universitat Pompeu Fabra in Barcelona noted in a recent paper, TWT also “opens the door” to fully maximizing new MU capabilities in 802.11ax by supporting the scheduling of both MU-DL and MU-UL transmissions. In addition, TWT can be used to collect information from stations, such as channel sounding and buffer occupancy, in pre-defined periods. Last, but certainly not least, TWT can potentially help multiple WLANs in dense deployment scenarios reach consensus on non-overlapping schedules to further improve Overlapping Basic Service Set (OBSS) co-existence.
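The power-saving core of TWT can be illustrated with a toy scheduler that hands each station a non-overlapping wake window inside a beacon interval. This is purely illustrative and is not the 802.11ax negotiation protocol:

```python
def build_twt_schedule(stations, interval_ms=100):
    """Toy Target Wake Time scheduler (illustrative only).

    Splits a beacon interval into equal, non-overlapping wake
    windows so each IoT station powers its radio only in its own
    slot and sleeps the rest of the time -- the power-saving idea
    TWT contributes, and the non-overlap idea it could extend to
    whole WLANs for OBSS co-existence.
    """
    slot = interval_ms / len(stations)
    return {sta: (round(i * slot, 3), round((i + 1) * slot, 3))
            for i, sta in enumerate(stations)}

schedule = build_twt_schedule(["sensor-1", "sensor-2", "sensor-3", "sensor-4"])
print(schedule["sensor-1"])  # (0.0, 25.0)
print(schedule["sensor-4"])  # (75.0, 100.0)
```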
The City of the Future will be run by AI, Machine Learning, Blockchain on Next Gen Internet
Networks of the future will be Blockchain and Li-Fi that can support 6G, where the spectrum will be extended to everywhere there is electricity on the power grid. Lamp-posts will be smart, with cameras and Li-Fi, and will support wireless charging anywhere, with speeds beyond your wildest imagination. Traffic management systems will control traffic both on land and in the air; everything, including traffic lights and cameras, will be linked to AI for control, from autonomous cars, buses and trains to flying cars and taxis. Homes will be built in the sea on supporting platforms, and man will travel to Mars to colonise it. All this will happen before 2050. Li-Fi has the potential to scale massively, with more cells per square inch, maximising the spectrum.
The Next Generation Interface for Voice Controls
The technology is already here. You can use text-to-voice to convert everything to voice. Link it to a dictionary of keywords to make the computer understand different languages, male and female voices and their emotions; even programming commands can easily be adapted with machine learning. Everything is integrated with algorithms, so this interface can be used to control everything in a computer. We now have Google Assistant, Siri and Alexa, and soon everything can be linked so that you control everything you use in software, like using a voice command to read your emails, ask for the weather, etc. Most technology today is not able to properly link voice activation to all applications.

With the same technology you use for a DeepFake, you can divide the video and the sound, and recreate or merge them with other samples, to create anything you want. I can even encode programming code into sound, even creating malware or hacking tools; there are no boundaries I can think of. I am just not releasing this to the public, or else the damage would be great. AI algorithms will also make spiders so intelligent that they will be able to identify images, split videos into images and voice and identify their contents, and even support transcribing voice into text and translating languages. This is the missing link everyone is looking for, and those who invested in DeepFake technology already know it is now possible. Welcome to the future of technology.
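A minimal sketch of the "dictionary of keywords" idea above: route a transcribed utterance to the registered command whose keywords it overlaps most. Every name here is made up for illustration, and real assistants use trained NLU models rather than keyword counting:

```python
def route_command(transcript, commands):
    """Pick the command whose keyword set best matches the words
    in a speech-to-text transcript (toy keyword router, not a
    real assistant API)."""
    words = set(transcript.lower().split())
    best, best_score = None, 0
    for name, keywords in commands.items():
        score = len(words & set(keywords))  # shared keywords
        if score > best_score:
            best, best_score = name, score
    return best

# Hypothetical command dictionary, as the text describes.
commands = {
    "read_email":  {"read", "email", "inbox"},
    "get_weather": {"weather", "forecast", "rain"},
}
print(route_command("please read my latest email", commands))  # read_email
```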
The Intelligent OS will handle processes similarly to https://veryfast.io, handling millions of threads this way.
I have designed the Intelligent OS to follow the distributed-network concept, with a stack similar to the above to handle the CPU, processes, threads, memory and even virtual memory, where everything is automated and controlled by AI using algorithms. This concept follows the mainframe of a Unix system, but everything, load balancing, the control of processes and threads, memory, which algorithms will run and which applications to run, needs no administrator. The features will be similar to Ubuntu Linux, where a software stack will sit below the control stack, and it does not matter which OS or database you use, except that it will be encrypted. It can be compatible, but take note that this will only run on the next-generation Blockchain with a new transport layer, due to security issues. Even the browser needs to be reinvented, to handle HTTP/3 and to display 3D and virtual reality, which I will solve in my last phase. If you understand everything, get ready for the final phase on the completion of the browser design, and adapt your technologies for the great change that will happen on the next-generation Blockchain, where TCP/IP and Blockchain nodes can co-exist but may not be totally compatible. This new network will usher in the neuromorphic computers, where the work of research and finding solutions will be greatly simplified, one day even smarter and more capable than me.
1) No network scanner tools will work, as my Cloud servers only accept connections on a particular port and IP from a particular client; others will not see anything, and the server will be "invisible".
2) All sensitive data from Accounting, HR, CRM and collaboration software will be totally hidden from public view, with no access at all.
3) So all third-party vendors I work with must understand my requirements first and allow me to implement all these modifications before I start using them.
4) Yes, the routing rules will therefore be extremely complicated, as I need to specify both incoming and outgoing rules, and which ports and IPs I use; this can only be done manually now, but I will automate it in future.
5) Yes, I can even control the routes taken through hops of any networks, as long as it is logically possible, even with VPN, which not many people in the market can do, even IT pros. I do it by mapping every network the traffic passes through and deciding which paths to take. So, like trusted websites, there will be trusted networks, and once a successful path is taken it will be stored and remembered for AI (new technologies); these algorithms can be used in intelligent routers, switches and the latest 5G equipment with a software update.
6) Clue: if you understand all my technologies, you will now understand how my Intelligent OS will be merged with Blockchain 3.0, with new routing technologies using intelligent routing, NAT, and merging with MAC addresses using Matrix nodes and IDs. As this is a distributed network using nodes, there will be no need for a DNS server; all will be controlled by the transport layer of a distributed node. This technology has been working and tested by miners worldwide; only the design of the control stack and software stack has not yet been optimised for the RISC-V clusters, Intelligent OS and Blockchain 3.0, but that can be done very soon.
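The "trusted networks" idea in point 5, remembering a successful path and reusing it, can be sketched as a toy router with a path cache. This is illustrative only (real networks use protocols like BGP and OSPF, and every name below is hypothetical):

```python
class TrustedRouter:
    """Toy router that remembers successful paths, as point 5
    describes: once a path through known hops works, it is stored
    and reused instead of being rediscovered."""

    def __init__(self, links):
        self.links = links          # adjacency map: node -> set of hops
        self.trusted_paths = {}     # (src, dst) -> remembered path

    def find_path(self, src, dst, seen=None):
        if (src, dst) in self.trusted_paths:
            return self.trusted_paths[(src, dst)]   # reuse trusted route
        if src == dst:
            return [src]
        seen = (seen or set()) | {src}
        for hop in sorted(self.links.get(src, ())):
            if hop not in seen:
                rest = self.find_path(hop, dst, seen)
                if rest:
                    path = [src] + rest
                    self.trusted_paths[(src, dst)] = path  # remember it
                    return path
        return None

router = TrustedRouter({"home": {"isp"}, "isp": {"vpn", "cdn"}, "vpn": {"cloud"}})
print(router.find_path("home", "cloud"))  # ['home', 'isp', 'vpn', 'cloud']
```

The second lookup of the same (source, destination) pair returns instantly from the cache, which is the "stored and remembered" behaviour the text asks algorithms in routers and switches to provide.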
We are now seeing the light on the horizon. You will have a totally open-source hardware and software solution that will explode: RISC-V clusters on a version of Ubuntu features, running the Intelligent OS on Blockchain 3.0, a node on every IoT device, sharing and transferring information instantly and seamlessly, connected at extreme speeds. We are going to see all these breakthroughs soon, and I will be the catalyst. Now, with the latest programmable CPU technologies adopted by Intel and AMD, where 3D stacks and components can be incorporated into the CPU, we are seeing customisable hardware solutions catching up with software, where everything can be maximised for peak HPC performance, even beating what the quantum computer can do today. Yes, I intend to study all the features of GPU and software acceleration, Edge capabilities for both local and external data, AI image capabilities and NPUs, and NVMe transfer speeds for cache, memory and virtual memory, to create a solution where everything is optimised in an architecture that is 100x beyond the capabilities of Apple Quantum computers, or I cannot achieve what I want when I bring everything together to use algorithms, AI and ML beyond Blockchain 4.0.
Step 1. NVIDIA Jetson Nano Dev Kit to test GPU software acceleration.
Step 2. Coral Dev Kit to test Edge capabilities.
Step 3. Raspberry Pi 4B to test AI image recognition capabilities.
Step 4. Khadas VIM3: 4K NVMe SBC with NPU
Step 5. I will create my own RISC-V clusters to test the indexing and presentation of 3D search capabilities on the new 3D browser interface, optimising the control stack and later the software stack, and finally port everything to my platform of Neuromorphic Clusters and maximise it to 100x what Apple Quantum computers can do.