(link) Shared Spectrum, and Mark McHenry in particular, gets a nice writeup in the WaPo. The politics of unlicensed white space also comes up.
(link) IET Workshop on SDR and Cognitive Radio. It’s in London on the 18th, so I won’t be there. But Keith from CTVR will be. *UPDATE* In the comments, Keith notes that some of the presentations may appear as webcasts at this site.
(link, pdf) On Monday, Oct 27, E3 and the SDR Forum will host a joint workshop on business, exploitation, modem architecture, regulation, and standardization aspects of SDR and CR. I’m tentatively slotted to give an outbrief on the SDR Forum’s contribution to an ITU report on “Cognitive radio systems in the land mobile service.” (lots of interesting stuff to cover in 30 minutes)
(link) Cognitive radio got some love at the Intel Developers’ Forum. I don’t see the talk in the catalog though.
Joe Mitola gave the keynote address this morning at VT’s Wireless Symposium, entitled “The Future of Cognitive Radio.”
My brief notes are below the fold. Read the rest of this entry »
Though it’s on robotics, I thought several portions of this Discover article on Artificial Intelligence were relevant to cognitive radio. Excerpting with my CR-focused comments below the fold. (emphasis mine, links are Discover’s)
Read the rest of this entry »
In this post, Keith attempts to place cognitive radio on Gartner’s Hype Cycle. He notes that SDR is emerging from the “Hype Trough,” with actually useful SDR products now coming to market, and posits that cognitive radio is near the Peak of Inflated Expectations, as evidenced by the large number of CR conferences.
If it’s not too indulgent, I’ll both agree and disagree with Keith.
If you consider cognitive radio to be the “magic black box” that will solve all of wireless networking’s problems (snicker, but that’s not an uncommon sentiment, and one I think is consistent with an assumption of embedding true AI into a radio), cognitive radio will most definitely follow Gartner’s cycle. It’ll be years before we have the cheap computational power and software processes necessary to realize the required artificial intelligence. In the meantime, cognitive radio will be dramatically overhyped; when the hype is not quickly realized, most people will turn pessimistic on the technology, inducing a hype trough. Eventually, however, AI will be embedded in your radio (likely shortly after the Singularity) and the stages of Gartner’s hype cycle will be complete.
However, if you consider cognitive radio to be a shift in the wireless networking design process to one that allows design decisions to be made by “intelligent” devices post-deployment, then I don’t think Gartner’s cycle will apply. The emergence of actual SDR noted by Keith will (and in some cases already does) dramatically shorten the transition time from algorithm conception to deployment. Thus when researchers conceive of an intelligent algorithm consistent with the cognitive radio design paradigm, we’ll be able to transition it to productive realizations almost immediately. Of course, the better coupled an organization’s algorithm design and testing processes are with its deployment processes, the faster the transition from concept to productive implementation will be.
For example, consider Dynamic Spectrum Access (DSA), which is certainly a long way from a realization of cognitive radio the magic black box, but is an example of the cognitive radio design paradigm. DSA (while we’re still researching it!) is already being standardized in 802.22, 802.11h, 802.11y, and 802.16h. Likewise, other realizations of the cognitive radio design paradigm (edge security, intelligent RRM, cognitive routing…) should also move so quickly from conception to implementation that neither the hype peak nor the hype trough will have time to build prior to productive deployments.
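To make the DSA example concrete, here’s a minimal sketch of the sense-then-select loop at its core. Everything in it is invented for illustration — the channel numbers, the simulated incumbents, the detection threshold, and the Gaussian noise model are assumptions, not any standard’s actual parameters:

```python
import random

def sense(channel):
    """Toy energy detector: return a simulated power reading in dBm."""
    # Assumed model: channels 2 and 5 host incumbents; others are noise-only.
    noise = random.gauss(-100, 2)
    return noise + (40 if channel in (2, 5) else 0)

def select_channel(channels, threshold_dbm=-90, samples=10):
    """Average several sensing samples per channel, then pick the
    quietest channel that falls below the occupancy threshold."""
    readings = {ch: sum(sense(ch) for _ in range(samples)) / samples
                for ch in channels}
    vacant = {ch: p for ch, p in readings.items() if p < threshold_dbm}
    if not vacant:
        return None  # all channels occupied; back off
    return min(vacant, key=vacant.get)

best = select_channel(range(8))  # avoids the simulated incumbents on 2 and 5
```

The point isn’t the algorithm (real 802.22-style sensing is far more involved); it’s that once the radio platform is software-defined, swapping in a smarter `select_channel` is a software update, not a hardware redesign.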
So, I’ll agree with Keith that cognitive radio as “artificial intelligence embedded in a radio” will most definitely follow Gartner’s hype cycle. However, there’s another deployment path for cognitive radio wherein envisioned cognitive radio capabilities are deployed as a series of intelligent algorithms incorporated into radios. The transition time for these algorithms will be much shorter because the goals are much more manageable. Further, other trends in the wireless world (such as the emergence of SDR from the hype trough) will so shorten the transition period that the hype bubble (peak and trough) will not have time to build prior to deployment.
Here. Basically, they’re looking to make a computer-controlled avatar that could pass the Turing test. While there’s not a lot of technical depth there, from a CR perspective there are some insights to be drawn from the following excerpts.
Mimicking the behavior of a human-controlled avatar in a virtual world like Second Life is possible, according to Bringsjord, if you craft the necessary algorithms carefully and run them on the world’s fastest supercomputer. Bringsjord’s synthetic-character software runs on the supercomputers at CCNI, which together provide more than 100 teraflops, including a massively parallel IBM Blue Gene supercomputer (the title-holder to world’s fastest supercomputer), a Linux cluster-supercomputer, and an Advanced Micro Devices Opteron processor-based cluster supercomputer.
Rascals is based on a core theorem proving engine that deduces results (proves theorems) about the world after pattern-matching its current situation against its knowledge base. Each proven theorem then initiates a response by virtue of having a synthetic character speak and/or move in the virtual world.
“Upon analysis, anything that our synthetic character says or does, is the result of a theorem being proven by the system,” said Bringsjord. “So far, theorem provers have only been used in toy-problems. We are scaling that up to enough knowledge for a synthetic character, which requires a very fast supercomputer.”
Bringsjord’s research group recently passed a milestone by programming a synthetic character to understand a “false belief.” For instance, to create a false belief you could hide a stuffed bear in a cabinet in front of a child and an adult, and then when the adult leaves the room, move the bear to a closet while the child is still watching. Here, the child should know that the adult now has a false belief–that the bear is still in the cabinet.
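The false-belief milestone above can be captured in a few lines: each observer’s world model only updates while they’re present, so the models diverge. This toy sketch (the class and scenario are mine, not Bringsjord’s software) shows how cheap the bookkeeping is when the reasoning domain is this restricted:

```python
# Toy model of the false-belief scenario: an observer's belief about the
# bear's location only updates while that observer is in the room.
class Observer:
    def __init__(self, name):
        self.name = name
        self.present = True
        self.belief = None  # believed location of the bear

    def observe(self, location):
        if self.present:
            self.belief = location

child, adult = Observer("child"), Observer("adult")
for obs in (child, adult):
    obs.observe("cabinet")      # both see the bear hidden in the cabinet

adult.present = False           # the adult leaves the room
for obs in (child, adult):
    obs.observe("closet")       # bear is moved; only the child sees it

# The child can now attribute a false belief to the adult:
# adult.belief is "cabinet" while the true location is "closet".
```

The hard part Bringsjord is after — deducing this from general knowledge rather than hand-coded state — is what needs the supercomputer; tracking the state itself doesn’t, which is encouraging for radios.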
In general, we (the CR community) cannot assume our radios will have their own supercomputers (at least for a decade, if not longer).
- Our radios’ run-time adaptations will need to be determined by far simpler algorithms than those currently being explored in the AI community
- This could be supplemented by case-based reasoning to choose algorithms
- Attempts to automatically define solutions for completely novel problems should likely only be handled during off-line processing (to establish cases for online processing).
- Even then, it will likely be necessary to restrict our reasoning algorithms to scenarios analogous to the “toy-problems” alluded to in the preceding. In practice this means highly restricted reasoning domains.
- Since we shouldn’t be trying to make the 100% solution in our first generation of CR (or even the second or third generations), we can and should be constantly looking for ways to “cheat”. The AI community has to infer what people are thinking. For inter-radio reasoning, one radio could just ask / download what the other radio knows.