
Monday, February 3, 2020

Web Server Global Sampling Scan/Enumeration Test Notes, Google Vulnerabilities, and More

Web Server Global Sampling Scan/Enumeration Test Notes:
- recently I came across a website which lists the IP ranges of countries around the globe, and it led me on to some other thoughts
- more and more companies are blocking disposable email address signups because they want the data for tracking, marketing, on-selling, etc... Luckily, there are more and more alternatives to Guerrilla Mail now and there are still a few that work against https://www.ip2location.com/ (you need one to get a token for automated downloads via curl, wget, etc...)
alternative to guerrillamail
temporary email redirect address
disposable email address
- I wondered whether or not the Internet could operate without DNS (a few core DNS servers have been attacked on a serious scale before, but not to a level that could result in serious wide-scale service degradation across the Internet)? Could you just enumerate IP addresses and see what you found, just like in my DNS/AWS S3/Github enumeration script pack?
http://dtbnguyen.blogspot.com/2020/01/dnsamazon-s3github-enumeration-pack.html
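As a rough illustration of the idea (this is a sketch, not the released script pack), a DNS-free sweep can be done in a few lines of Python — expand an IP range and see what answers on port 80. The CIDR below is a documentation range, so substitute your own (authorised) targets:

```python
import ipaddress
import socket

def expand_range(cidr, limit=256):
    """Expand a CIDR block into individual host addresses (capped at limit)."""
    net = ipaddress.ip_network(cidr, strict=False)
    return [str(ip) for ip in list(net.hosts())[:limit]]

def has_web_server(ip, port=80, timeout=2.0):
    """Return True if something accepts a TCP connection on the given port."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: sample a block without ever touching DNS
# for ip in expand_range("203.0.113.0/28"):
#     if has_web_server(ip):
#         print(ip, "answers on port 80")
```

No name resolution is involved anywhere, which is the point — if DNS disappeared tomorrow, this still works.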
- part of me wonders whether or not we should re-allocate things in the IP address space like in the Dewey Decimal system? That way, even if DNS goes down, you can still enumerate and run things independently?
dewey decimal
Dewey Decimal Classification, also called Dewey Decimal System, system for organizing the contents of a library based on the division of all knowledge into 10 groups, with each group assigned 100 numbers.
https://www.britannica.com/science/Dewey-Decimal-Classification
https://en.wikipedia.org/wiki/Dewey_Decimal_Classification
- what I found was that it was the exact opposite this time. Basically, anyone and everyone is on the Internet now but most of them don't really care about proper configuration, setup, etc...
- I know this from doing a tiny sample sweep of several hundred servers across several countries and seeing whether any of them had web servers running. It reminds me a lot of findings by others such as Shodan, Binary Edge, Netcraft, etc... and work that I had previously done in this particular area (check my book on "Cloud and Internet Security")
binary edge
netcraft
- Shodan and Binary Edge seem to have multiple levels through which to make money. The free stuff basically does a very basic look to see what is out there. They then sell further services as you move up tiers. Obviously, since these are borderline automated penetration tests and commercial operations, fees should be expected, as well as limitations on the total number of queries. The nice thing about these services is that they search for information leaks as well. No idea how well they work but it's obviously easier than doing it yourself
shodan pricing
- services such as Shodan and Binary Edge are very useful but you could probably build something just as useful yourself? The basic underlying tools seem to be FOSS (Free and Open Source Software). Combine some of my previous tools and you could build something like that easily
http://dtbnguyen.blogspot.com/2020/01/dnsamazon-s3github-enumeration-pack.html
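The basic fingerprinting step these services perform is not exotic — a HEAD request and a look at the response headers (the Server header alone identifies a lot of software). A minimal stdlib sketch of that building block, not their actual pipeline:

```python
import http.client

def grab_http_headers(host, port=80, timeout=3.0):
    """Issue a HEAD request and return the status plus response headers.
    The Server header alone is often enough to fingerprint a host."""
    conn = http.client.HTTPConnection(host, port, timeout=timeout)
    try:
        conn.request("HEAD", "/")
        resp = conn.getresponse()
        return {"status": resp.status, **dict(resp.getheaders())}
    except OSError:
        return {}
    finally:
        conn.close()
```

Feed it a list of responsive IPs from an enumeration pass and you have the skeleton of a very crude Shodan. The hard (and expensive) part is doing it at scale and indexing the results, not the probing itself.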
- my gut feel is that at least some of these dodgy systems have to be valid Cyberwarfare Black Ops? Namely, they could be operations to create SPAM email, bot control systems and nodes, social media/propaganda bots, systems to help launch attacks or run recon on other systems, etc? When you realise how many dodgy systems are out there you also realise that if there was a concerted effort by a small group of skilled actors using advanced and custom tools to create trouble they could take down many parts of the Internet
Facebook still auto-generating Daesh, Al-Qaeda pages
https://www.arabnews.com/node/1556586/media
https://www.independent.co.uk/life-style/gadgets-and-tech/news/facebook-twitter-google-isis-daesh-internet-youtube-social-media-home-affairs-a7208131.html
https://en.wikipedia.org/wiki/Use_of_social_media_by_the_Islamic_State_of_Iraq_and_the_Levant
https://dtbnguyen.blogspot.com/2017/08/the-big-5-us-it-firms-arent-unbeatable.html
- in the past, and during this particular experiment, I've looked at dodgy servers/IP addresses. It's clear a lot of them are legitimately dodgy, clear that part of them are honeypots (I've been working on more subtle honeypots), clear that some of them are for intelligence/information collection, etc... Don't be surprised if someone/something attempts to hack you (your system/s may start acting funny/differently from how they normally act) if you go snooping around some of these systems. You're better off simply staying away from them if you want to stay secure/safe. As a bare minimum, scope these systems out from an isolated system and/or network
mailbox validation script
https://www.scottbrady91.com/Email-Verification/Python-Email-Verification-Script
https://github.com/scottbrady91/Python-Email-Verification-Script
curl smtp server get banner
https://ec.haxx.se/usingcurl/usingcurl-smtp
https://www.hackingarticles.in/5-ways-banner-grabbing/
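The SMTP banner grab covered in the links above can also be done with a plain socket — a sketch (the host you point it at is your own choice; only probe servers you're authorised to test):

```python
import socket

def smtp_banner(host, port=25, timeout=5.0):
    """Connect to an SMTP server and read its 220 greeting line (the banner),
    then politely QUIT rather than leave a half-open session."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        banner = s.recv(512).decode("ascii", errors="replace").strip()
        s.sendall(b"QUIT\r\n")
        return banner
```

The linked curl documentation covers the equivalent `curl smtp://host` approach if you'd rather stay on the command line.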
spam ip address list
https://myip.ms/browse/blacklist/Blacklist_IP_Blacklist_IP_Addresses_Live_Database_Real-time
https://zeltser.com/malicious-ip-blocklists/
http://iplists.firehol.org/
https://www.liveipmap.com/ipcomplaints?page=1&duration=onemonth
https://github.com/client9/ipcat
https://lite.ip2location.com/
open smtp server pastebin
firehol ip address list direct url
https://www.blocklist.de/en/index.html
https://docs.danami.com/juggernaut/user-guide/ip-block-lists
https://forum.mikrotik.com/viewtopic.php?t=152632
https://github.com/ktsaou?tab=repositories
https://weberblog.net/palo-alto-external-dynamic-ip-lists/
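Once you've downloaded one of the blocklists above, checking addresses against it is straightforward. A sketch assuming a FireHOL-style netset format (one IP or CIDR per line, `#` comments):

```python
import ipaddress

def parse_netset(text):
    """Parse a FireHOL-style .netset/.ipset file: one IP address or CIDR
    block per line, with '#' starting a comment."""
    nets = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if line:
            nets.append(ipaddress.ip_network(line, strict=False))
    return nets

def is_listed(ip, nets):
    """Return True if the address falls inside any listed network."""
    addr = ipaddress.ip_address(ip)
    return any(addr in net for net in nets)
```

For large lists you'd want something smarter than a linear scan (e.g. a sorted interval structure), but this is enough to cross-check a scan's output against a blacklist.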
open relay smtp ip address list
Open Relay Database Servers keep lists of known or suspected IP addresses that try to relay mail through unauthorized mail servers on the Internet. Here are some examples of ORDB servers:
inputs.orbz.org
outputs.orbz.org
relays.ordb.org
orbs.dorkslayers.com
dev.null.dk
relays.osirusoft.com
bl.spamcop.net
relays.visi.com
smallest virtual machine
https://wiki.freepascal.org/Small_Virtual_Machines
http://mikelev.in/ux/
https://github.com/miklevin/
tiny core linux
http://tinycorelinux.net/
https://wiki.freepascal.org/ReactOS
https://reactos.org/
https://sourceforge.net/projects/reactos/files/ReactOS/
vde2 vlan
virtual router vm qemu
https://github.com/rendoaw/virtual-router-with-qemu
http://www.linux-kvm.org/page/Networking
qemu honeypot
https://www.honeynet.org/tag/qemu-d52/
http://securitytools.wikidot.com/honeypot-utilities
https://www.honeynet.org/category/honeypot/page/2/
http://www.blackalchemy.to/project/fakeap/
http://www.few.vu.nl/argos/
https://github.com/cowrie/cowrie
http://securitytools.wikidot.com/plotting
https://github.com/paralax/awesome-honeypots/blob/master/README.md
https://0wned.it/2016/07/30/creating-a-highly-interactive-honeypot-with-honssh/
https://embedgen.wordpress.com/2015/07/22/build-a-honeypot-to-capture-embedded-malware/
https://www.cl.cam.ac.uk/~amv42/papers/vetterl-clayton-honware-virtual-honeypot-framework-ecrime-19-slides.pdf
- common issues are outdated certificates, improper network time synchronisation, old/unpatched/misconfigured/unconfigured software, etc... Poorer countries tend to have more of these problems and also seem to have more IP addresses listed on blacklists. Backbone companies such as Akamai and Netcraft, and consulting firms such as Accenture and Deloitte, have a history of publishing results in this area. The only issue is their accuracy?
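The outdated-certificate check in particular is easy to automate with the Python standard library — a sketch (again, only against hosts you're authorised to assess):

```python
import ssl
import socket
from datetime import datetime, timezone

def cert_days_left(host, port=443, timeout=5.0):
    """Return the number of days until the server's TLS certificate expires
    (negative means it has already expired)."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'notAfter' is a string like 'Jun 26 21:41:46 2030 GMT'
    expiry = ssl.cert_time_to_seconds(cert["notAfter"])
    now = datetime.now(timezone.utc).timestamp()
    return (expiry - now) / 86400.0
```

Run that over an inventory and alert when the result drops below, say, 30 and you've covered one of the most common misconfigurations on this list.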
- there are a lot of strange systems out there and companies who have systems in places you wouldn't expect? What's interesting is the future architecture of the Internet. It seems to involve a lot of shields (Cloudfront, Akamai, BigIP, etc...), better defended systems, upgraded protocols, more centralisation (such as Big IT companies who do the heavy lifting), etc... I suspect the reason why some larger companies are having more success at defending against threats is that they do massive sweeps of the Internet? They basically lock out the bad corners and operate in their own safe little corner? The obvious irony is that if you have good local IT you may be better off keeping things local; there may be no real gain because the cloud service provider may be heavily reliant on a FOSS-based backend
akamai annual server report
https://www.akamai.com/us/en/resources/our-thinking/state-of-the-internet-report/
cloudfront annual report
https://aws.amazon.com/cloudfront/reporting/
https://aws.amazon.com/cloudfront/?nc=sn&loc=0
- I know of people in the Cybersecurity world who considered using these search engines as a means of finding business. The obvious irony that I've found in my research is that a lot of the people who you may contact just don't care. Even if they operate in the so called security industry they often don't care and don't respond. I actually thought about building a crawler that would contact owners of infrastructure to inform them of potential issues in their network but realised if they don't care then what's the point (check my book on "Cloud and Internet Security")? I'd rather continue the research and re-direct it to networks that I watch over
- to this end I discontinued my research for the time being. I obviously have a precursor to something like Shodan and Binary Edge though?
shodan pricing
- something I found really weird is that legitimate servers I found had responses that were in English rather than the local language?
- it's not that difficult to add extra capabilities to this script so that you can do a light audit of systems on your own network as well. I tried running a more advanced variant of this script against my own network. It was obviously designed for more sophisticated testing. It caused a lot of problems that I didn't anticipate. Certain types of scans and cracking can lead to various countermeasures kicking in. I actually had to reset certain systems to regain access. Misconfigured servers literally went down and had to be reset, while some test systems (left in a vulnerable state) were obviously breached
- security best practice says you run unique usernames/passwords for each device or user in your network. A while back I came up with a theory about using chained and randomised hash functions/algorithms/components against unique components of your network/users. It's easier to keep track of an algorithm than it is to keep track of a zillion passwords. It can be facilitated by a mobile app on secure hardware (an aspect common to classified networks). It's possible on insecure networks as well if you just want to try it for fun
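A toy sketch of that idea — chaining HMAC over a device identifier so that one master secret plus the algorithm yields a distinct, reproducible credential per device. The parameters here are illustrative only; this is not a vetted key-management scheme:

```python
import base64
import hashlib
import hmac

def derive_password(master_secret: bytes, device_id: str, rounds: int = 3) -> str:
    """Derive a unique per-device password by chaining HMAC-SHA256 over the
    device identifier. You remember one master secret and the algorithm;
    every device still ends up with a distinct credential."""
    digest = device_id.encode("utf-8")
    for i in range(rounds):
        # Mix the round counter in so each link of the chain differs
        digest = hmac.new(master_secret, digest + bytes([i]), hashlib.sha256).digest()
    return base64.urlsafe_b64encode(digest)[:20].decode("ascii")
```

The same inputs always regenerate the same password, so nothing needs to be stored — which is exactly the property being described above.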
- I like to keep track of timing of assessments because it tells you how thorough someone may be. I rely heavily on automation but even then the process is slow
http://pages.cs.wisc.edu/~ace/media/gray-hat-hacking.pdf
legal kali linux
Yes, it is 100% legal to use Kali Linux. Kali Linux is an operating system developed in collaboration with open source penetration testing software. It is an operating system dedicated to ethical hacking and is used by many professionals in the field of cyber security.
https://www.quora.com/Is-it-legal-to-use-Kali-Linux
https://steemit.com/hacking/@ali1357/what-is-kali-linux-legal-or-illegal-c65fa8f1b038f
https://www.reddit.com/r/HowToHack/comments/2u8bxc/is_there_a_legal_way_to_practice_kali_linux_tools/
https://www.vulnhub.com/
- it's obvious that only a tiny group of these servers have advanced firewalls/edge systems. Otherwise, they would respond in ways more in keeping with protocol across the board. Over time it becomes easier to 'see' hijacked systems. They just look different on scans from normal systems. If you end up spending enough time around computer networks you'll know them on sight
- I've obviously thought about automated patching systems, stealth/cloaking style technology, dynamic topology changes, etc... as ways of creating resilience against concerted attacks. Unrealistic unless you have control over the networks in question though
http://sites.google.com/site/dtbnguyen/
- the script that I used is as follows:
- description is as follows:
# I just wanted to see, if I did a tiny sample of the Internet, what
# would be out there with regards to websites, and whether the Internet
# could survive with DNS capability taken out.
#
# I thought I'd end up with a lot of legitimate websites (like in my 
# DNS, Github, Amazon AWS S3 bucket enumeration experiments) but it ended
# up being more like the results from so-called security search engines:
# Just random stuff is out there. A lot of it unconfigured, old,
# misconfigured, unpatched, etc...
#
# I didn't actually scan the entire Internet. I realised pretty early on
# that if I tried that, the process would likely last months (even if I
# optimised it and ran it again it probably wouldn't make much of a
# difference because there are many issues at play including network
# connections that need to be made, total scans that need to be done,
# download quotas, etc...). At most, I looked at a few hundred servers per
# country.
#
# Anyhow, this is the source code if you're interested. It's obviously very
# similar to a primitive hybrid enumerator/web crawler. It can easily
# be converted to something like Shodan or Binary Edge, or to monitor/audit
# your own network as well. This will make more sense as I work on other
# projects or as your experience grows. There's some randomisation thrown
# in to make things look less strange to monitoring systems.
#
# The source code as released obviously doesn't do anything. It's obvious 
# that you need to uncomment, run things in the correct sequence, and 
# modify it in many of the right places for it to do anything significant (a
# safeguard against script kiddies). As a side note, you need to make
# significant changes for it to be used in an offensive capacity. Its
# primary use is for research/study.
#
# As this is the very first version of the program it may be VERY buggy.
# Please test prior to deployment in a production environment.
#
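The randomisation mentioned in the description can be as simple as jittered inter-probe delays — an illustrative sketch, not the released script:

```python
import random
import time

def jittered_delays(n, base=2.0, spread=3.0, seed=None):
    """Generate n randomised inter-probe delays in [base, base+spread] seconds,
    so the sweep doesn't show a fixed-interval signature to monitoring systems."""
    rng = random.Random(seed)
    return [base + rng.uniform(0.0, spread) for _ in range(n)]

def paced_probe(targets, probe, base=2.0, spread=3.0):
    """Run probe() against each target with a jittered pause in between."""
    for target, delay in zip(targets, jittered_delays(len(targets), base, spread)):
        yield target, probe(target)
        time.sleep(delay)
```

Shuffling the target order before the sweep (e.g. with `random.Random().shuffle`) helps for the same reason: sequential IPs probed at fixed intervals are one of the easiest scan signatures to detect.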

Google Vulnerabilities:
- this leads me to my next point. It's obvious that Google and several other data mining/search engine companies are rigging their search results. As with the "fake news" phenomenon, if you do a side-by-side comparison it becomes much more obvious. I'm reasonably certain that these aren't algorithmic issues or regional issues. They seem to index mostly US/Western sites and not as many foreign sites (even if they're in English and their content may be better?)?
Interview with Google Senior Software Engineer, Zach Vorhies
https://www.youtube.com/watch?v=cC_mBru78F4
Brainwashed Google Employees Unable to Critically Think w_ Zach Vorhies
https://www.youtube.com/watch?v=m26ZIHmdNkI
Why Google is now a Drug Company _ Maryam Unhinged
https://www.youtube.com/watch?v=tNAslW9zYRE
Why Google is Censoring Health News _ Maryam Henein _ Zach Vorhies
https://www.youtube.com/watch?v=4xjRmjs2rm8
search engines
- a lot of US/Western news aggregators (my strength in foreign languages isn't enough to check alternatives) are heavily biased in favour of the US/West and it's obvious that there are strong ties between FAANG and the US State Department. This can be a turn-off to those from other circles?
https://www.itwire.com/mobility/huawei-developing-own-search-engine-for-its-mobile-devices.html
google state department
- anything that is controversial and anti-Google, anti-US, anti-US allies, etc... seems to be scrubbed from results? The same service seems to apply across many US service providers?
conspiracy theorist google shadow ban
- we know that Google and other Big Data miners tend to try and rig results in their own favour, often in favour of their own governments, and even in favour of their own services (it's well known that major IT companies often have good connections/relationships with the security services: Kaspersky with the FSB/former KGB, Dell/Crowdstrike with the NSA/CIA, Huawei/Baidu with the Chinese government, etc...). Only when they get caught and reprimanded do they change. Even then it's only semi-compliance as well
Facebook & Google 'happy to hand over' user data to govts - Snowden
https://www.youtube.com/watch?v=BbWNFChCcOo
baidu chinese government
https://fortune.com/2019/04/09/eye-on-ai-china-artificial-intelligence/
https://en.wikipedia.org/wiki/Baidu
“Firms such as Huawei, Tencent, ZTE, Alibaba, and Baidu have no meaningful ability to tell the Chinese Communist Party ‘no’ if officials decide to ask for their assistance…Such aid may not necessarily occur routinely, but it certainly can occur—and presumably will—whenever the Party considers this useful and cares to demand it,” he said.
Ford also accused the Chinese companies of helping Beijing to develop, build, and maintain the techniques used for “a foundation of technology-facilitated surveillance and social control”, upon which he said the “China Dream” or “China Model” is built. Both terms refer to China’s efforts “to shape the world consistent with its authoritarian model,” he said. “As these companies export their products and services to the rest of the world, the security and human rights problems associated with this ‘China Model’ are progressively exported with them.”
https://qz.com/1708662/chinese-tech-giants-tools-of-the-communist-party-us-official/
kaspersky fsb
google rig results fine
The search engine manipulation effect (SEME) is the change in consumer preferences from manipulations of search results by search engine providers. SEME is one of the largest behavioral effects ever discovered. This includes voting preferences. A 2015 study indicated that such manipulations could shift the voting preferences of undecided voters by 20 percent or more and up to 80 percent in some demographics.[1][2]
The study estimated that this could change the outcome of upwards of 25 percent of national elections worldwide.
On the other hand, Google denies secretly re-ranking search results to manipulate user sentiment, or tweaking ranking specially for elections or political candidates.[3]
- experience of the FOSS world tells us that it's possible to do many things for free. Google, Facebook, Amazon, etc... are things that could be replicated easily elsewhere? I've actually thought about building my own mini-Internet/search engine of sorts using only sites that I care about and find useful. It wouldn't be that hard. The main issues are crawling speed and download quota
- they don't have that big of a gap technologically. Some people have said that "you can't out-Google Google". I think they're wrong. If you look deeper you understand where they're weak. The only thing stopping some people from beating them is lack of funding?
- the following seem to be blocked/scrubbed from search results: file download sites (includes Torrents, file download sites, etc...), criminal activity (which has moved to the dark net. Obviously to stop these things going mainstream), certain bad information regarding US and allied companies and operations, things regarding national security, etc... This can't be easy. It's likely they ended up with a lot of false positives as well?
ban torrent google
- if you read much about what they are and what they do they seem to imply that they are benevolent. In reality, they are no different from many other companies
- if others decide to tax them "properly" they're in trouble. Since many governments are in economic trouble it makes sense that they should tax FAANG
tax us tech
- a top-down approach doesn't work well with science? We know that the US has a history of blocking the free flow of knowledge. They have a strong belief in their own leadership. They create tiers in society, tiers of knowledge, tiered access to resources, etc... Obviously, this creates an advantageous situation for people who are already well off, and since the US came out on top after the World Wars, for themselves as well?
suicide download science journal open source
https://www.newyorker.com/magazine/2013/03/11/requiem-for-a-dream
The MBB Lampyridae (Latin for Fireflies) was a low-observable medium missile fighter (MRMF) developed during the 1980s by the West German aerospace company Messerschmitt-Bölkow-Blohm (MBB).[1] The programme was terminated during 1987 without any production aircraft having been produced.[2]
As early as 1975, West Germany is known to have conducted research into the field of stealth aircraft. During 1981, work commenced at MBB on developing a design for a viable stealth aircraft; the effort was supported by a contract that had been issued by the German Air Force. Also known as the Medium Range Missile Fighter (MRMF), it had been conceived that a fighter could be both lighter and cheaper if it was so superior at mid-range combat as to allow it to discard the requirement to perform close-range combat. Having been developed independently of other stealth aircraft, such as the American Lockheed Corporation's Have Blue technical demonstrator and its follow-up F-117 Nighthawk stealth attack aircraft, the Lampyridae nonetheless utilized a similar approach to achieving its low-observable characteristics.
After determining the Lampyridae's design to be viable, development activity proceeded to the construction of a single three-quarter scale piloted aircraft. During 1985, wind tunnel testing of the design, including at transonic speeds, commenced; two years later, a number of manned 'flights' inside the wind tunnel were performed, during which the favourable high-quality aerodynamic properties of the design were confirmed. During 1987, the existence of the Lampyridae project and its design was revealed to the United States in the form of a group of United States Air Force (USAF) officers, who were shown the piloted model, which was kept in a closed-off section of MBB's manufacturing facility at Ottobrunn, Bavaria, Germany. That same year, the Lampyridae project was terminated for unspecified reasons; diplomatic pressure on the part of the US has been attributed.
https://en.wikipedia.org/wiki/MBB_Lampyridae
- they have been very dependent on the US DoD and US government for a lot so I guess that's why they do a lot of favours for them?
- what happens if unstoppable browsers are built that are anti-advertising and tracking (we know that in the recent past individuals and companies who built these technologies were basically bribed)? A lot of people are uncomfortable with being tracked (doesn't matter what the context is)
https://www.businessinsider.com/facebook-challenger-mewe-saw-revenues-jump-800-according-to-ceo-2020-2
adblock bribe
- there seems to be a small resistance movement against Google now? I admit I'd like to explore better options if they are out there
Criticism of Google includes concern for tax avoidance, misuse and manipulation of search results, its use of others' intellectual property, concerns that its compilation of data may violate people's privacy and collaboration with Google Earth by the military to spy on users,[1] censorship of search results and content, and the energy consumption of its servers as well as concerns over traditional business issues such as monopoly, restraint of trade, antitrust, "idea borrowing", and being an "Ideological Echo Chamber".
Alphabet Inc. is an American multinational public corporation invested in Internet search, cloud computing, and advertising technologies. Google hosts and develops a number of Internet-based services and products,[2] and generates profit primarily from advertising through its AdWords program.[3][4]
Google's stated mission is "to organize the world's information and make it universally accessible and useful";[5] this mission, and the means used to accomplish it, have raised concerns among the company's critics. Much of the criticism pertains to issues that have not yet been addressed by cyber law.
Shona Ghosh, a journalist for Business Insider, noted that an increasing digital resistance movement against Google has grown. A major hub for critics of Google in order to organize to abstain from using Google products is the Reddit page for the subreddit /r/degoogle.[6]
degoogle
- at its core, current search engine technology (across the board) is very limited. It basically just ranks data but doesn't really understand or do anything useful with it. We know that Google (and others) are working on quantum computing, but deep down, even if they manage to make it work, they still need raw compute cycles/speed. Quantum computing will require a set of sub-technologies to make it work out in the way that some scientists hope it will?
quantum computer google
https://www.scientificamerican.com/article/hands-on-with-googles-quantum-computer/
https://en.wikipedia.org/wiki/Semantic_search
- they only index a tiny proportion (0.03% based on some estimates) of the total Internet according to some people. That means there is huge potential for people to out-Google Google by simply indexing more pages
Exploring the Dark Web
https://www.youtube.com/watch?v=BN1NU0ivzj8
101 Facts About The Deep Web
https://www.youtube.com/watch?v=EUZGY1gQgnw
deep web search engine
https://www.yippy.com/
https://www.dailydot.com/layer8/best-deep-web-search-engines/
http://deep-web.org/how-to-research/deep-web-search-engines/
https://thehackernews.com/2016/02/deep-web-search-engine.html
https://en.wikipedia.org/wiki/Deep_web
https://www.deepweb-sites.com/deep-web-search-engines/
- you can tell that they've made compromises in their architecture by the search results that they have. For instance, the number of related queries is limited, the size and type of indexing/ranking that they use is also limited, the total number of pages indexed is limited, types of queries can be limited/constrained (no regex searches allowed), index update time is limited, some correlations/relationships are defined as unidirectional rather than omnidirectional in their system, etc... You can identify this if you intelligently query their systems
- if you are observant you'll notice a lot of oddities/bugs/strangeness when you use their products. If you inform the company of bugs they'll sometimes say it wasn't designed to be used that way, come up with strange/canned answers, refuse to confirm/ignore you, etc...
Google has admitted some users' private videos were sent to "unrelated users" who downloaded data through its Takeout service for a few days in November – but wants you to pay the company to dig through your photos itself.
The search behemoth has quietly notified users of its Google Takeout service, which downloads a user's Google Data archive, that an unspecified number of their private videos ended up in random users' Takeout archives. The emails, sent a mere three months after the fact, are ominously vague, merely letting the user know that "one or more videos in your Google Photos account was affected" by the bug between November 21 and 25 of last year. Nowhere are users told which videos, or in whose hands they ended up – a fact that will no doubt keep some users awake at night. The company did say that still photos were not affected in a statement to 9to5Google on Monday.
For some search results, Google provides a secondary search box that can be used to search within a website identified from the first search. It sparked controversy among some online publishers and retailers. When performing a second search within a specific website, advertisements from competing and rival companies often showed up together with the results from the website being searched. This has the potential to draw users away from the website they were originally searching.[169] "While the service could help increase traffic, some users could be siphoned away as Google uses the prominence of the brands to sell ads, typically to competing companies."[170] In order to combat this controversy, Google has offered to turn off this feature for companies who request to have it removed.[170]
According to software engineer Ben Lee and Product Manager Jack Menzel, the idea for search within search originated from the way users were searching. It appeared that users were often not finding exactly what they needed while trying to explore within a company site. "Teleporting" on the web, where users need only type part of the name of a website into Google (no need to remember the entire URL) in order to find the correct site, is what helps Google users complete their search. Google took this concept a step further and instead of just "teleporting", users could type in keywords to search within the website of their choice.[171]
- it relies on automation to maintain its edge and profitability. That said, any bugs within its system are exacerbated due to its scale. If someone creates better AI or simply provides better service across the board they'll lose their edge (if you contact their support services, and their contractors in particular, you'll realise that quality varies a lot; they have contractors for a lot of different things including HR, normal staffing, quality rating, etc...)
google contractors appen
This setup highlights one of the many contradictions embedded in rater work. On the one hand, raters are supposed to represent average users, providing feedback that will help Google craft algorithms that serve the general public. On the other, raters have to stick with Google's interpretation of what an average user is—or risk getting their hours cut. One rater noted that the right answer on a task "often doesn't fit our experiences as real users outside of work."
https://arstechnica.com/features/2017/04/the-secret-lives-of-google-raters/
https://static.googleusercontent.com/media/www.google.com/en//insidesearch/howsearchworks/assets/searchqualityevaluatorguidelines.pdf
https://gighustlers.com/appen-review-scam-or-legit
Google has revived its transcription programs for Google Assistant, in which “human reviewers may listen to audio snippets [from users] to help improve speech technology,” according to a September 23, 2019 statement.
The statement, which focuses on Google’s beefed-up data privacy protections, explains that audio data from Google Assistant is not stored by default. Instead, users can opt in to help “improve the Assistant for everyone by allowing us to use small samples of audio to understand more languages and accents.”
Google’s new policy is that audio data from existing users not be included in any human review process unless users reconfirm this setting on their devices. During the transcription process itself, audio recordings are not associated with any user account.
Google suspended its transcription programs in July 2019 after a reviewer leaked confidential Dutch audio data. Google was in good company. Fellow tech giant Apple discontinued its own transcription practices in August 2019.
https://slator.com/demand-drivers/google-resumes-human-transcription-of-assistant-audio-content/
While the new ranking option addresses one particular problem highlighted by the Guardian and Observer, Google’s failure to keep fake news and propaganda off the top of search results is broader than simply promoting upsetting or offensive content.
Google has also been accused of spreading “fake news” thanks to a feature known as “snippets in search”, which algorithmically pulls specific answers for queries from the top search results. For a number of searches, such as “is Obama planning a coup”, Google was instead pulling out answers from extremely questionable sites, leading to the search engine claiming in its own voice that “Obama may be planning a communist coup d’état”.
The same feature also lied to users about the time required to caramelise onions, pulling a quote that says it takes “about five minutes” from a piece which explicitly argues that it in fact takes more than half an hour.
Shortly after each of these stories were published, the search results in question were updated to fix the errors.
https://www.theguardian.com/technology/2017/mar/15/google-quality-raters-flag-holocaust-denial-fake-news
In May 2011, Google cancelled the AdWord advertisement purchased by a Dublin sex worker rights group named "Turn Off the Blue Light" (TOBL),[109] claiming that it represented an "egregious violation" of company ad policy by "selling adult sexual services". However, TOBL is a nonprofit campaign for sex worker rights and is not advertising or selling adult sexual services.[110] In July, after TOBL members held a protest outside Google's European headquarters in Dublin and wrote to complain, Google relented, reviewed the group's website, found its content to be advocating a political position, and restored the AdWord advertisement.[111]
In June 2012, Google rejected the Australian Sex Party's ads for AdWords and sponsored search results for the July 12 by-election for the state seat of Melbourne, saying the Party breached its rules which prevent solicitation of donations by a website that did not display tax exempt status. Although the Sex Party amended its website to display tax deductibility information, Google continued to ban the ads. The ads were reinstated on election eve after it was reported in the media that the Sex Party was considering suing Google. On September 13, 2012 the Party lodged formal complaints against Google with the US Department of Justice and the Australian competition watchdog, accusing Google of "unlawful interference in the conduct of a state election in Victoria with corrupt intent" in violation of the Foreign Corrupt Practices Act.[112]
- international law makes it a nightmare to run a larger company. You have to codify a lot of local laws into the source code. This isn't as easy as it sounds
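To illustrate the point (this is a hypothetical sketch of my own, not how Google or anyone else actually implements it): one common way per-jurisdiction legal rules get codified into software is a policy table keyed by country code that is consulted before content or ads are served. The country codes and category names below are made up for the example, loosely echoing the ad disputes above:

```python
# Hypothetical per-jurisdiction ad policy table (illustrative only).
# Maps ISO country code -> set of ad categories blocked in that country.
AD_POLICY = {
    "DE": {"holocaust_denial"},       # e.g. local speech laws
    "AU": {"undeclared_political"},   # e.g. electoral disclosure rules
    "IE": {"adult_services"},
}

def is_ad_allowed(country: str, category: str) -> bool:
    """Return False if the ad category is blocked in the given country."""
    return category not in AD_POLICY.get(country, set())

print(is_ad_allowed("AU", "undeclared_political"))  # False
print(is_ad_allowed("US", "undeclared_political"))  # True
```

The hard part isn't the lookup, it's keeping a table like this accurate across hundreds of jurisdictions as laws change, which is exactly why it "isn't as easy as it sounds".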
- tries to do too much with what feels like a single algorithm or a series of overly simplified algorithms across many areas. Further modularity and data fidelity may yield better results? Obviously, I would take data mining technology down a very different route and try to harness the existing knowledge of people out there rather than going down a purely machine-based route?
- worker revolt sounds like a minor issue but isn't in their case. A lot of them applied to Google because they thought Google was a "good company". In reality, WikiLeaks, its practices, worker rights, secret deals, etc... have shown it to be not much different from other companies

Random Stuff:
- as usual thanks to all of the individuals and groups who purchase and use my goods and services
- latest in science and technology
- latest in finance and politics
Giving Thanks for Good News - #NewWorldNextWeek
- latest in defense and intelligence
Call now and rent US troops? But wait, there is more!
6 Things Media Won’t Tell You About Assassination of Iranian General (Web Exclusive)
Procurement: China Finally Masters Jet Engines
- latest in animal news
- latest in music and entertainment
Gary Connery's Wingsuit Landing without using a Parachute
VIDEO: 'Jetpack men' perform jaw-dropping stunt
Dream Lines IV - Wingsuit proximity by Ludovic Woerth & Jokke Sommer
Best of Wingsuit Proximity Flying 2013
Best of Wingsuit Proximity Flying 2014
Best of Wingsuit Proximity Flying 2015
GoPro - Red Bull Stratos - The Full Story
Felix Baumgartner's supersonic freefall from 128k' - Mission Highlights
Felix Baumgartner Space Jump World Record 2012 Full HD 1080p [FULL]
GoPro - Wingsuit Pilot Jeb Corliss on His Crash and Recovery

Random Quotes:
- Diwali, Deepavali or Dipavali is the festival of lights, which is celebrated by Hindus, Jains, Sikhs and some Buddhists every autumn in the northern hemisphere (spring in southern hemisphere).[5][6][7] One of the most popular festivals of Hinduism, Diwali symbolises the spiritual "victory of light over darkness, good over evil and knowledge over ignorance." Light is a metaphor for knowledge and consciousness.[8][9][10] During the celebration, temples, homes, shops and office buildings are brightly illuminated.[11] The preparations, and rituals, for the festival typically last five days, with the climax occurring on the third day coinciding with the darkest night of the Hindu lunisolar month Kartika. In the Gregorian calendar, the festival generally falls between mid-October and mid-November.[12]
- No matter how bad things become, only a few people will be ready to accept that the Economy is not a good thing. We believe in the Economy; we do not discuss its existence, precisely as a religion does with God. Instead of an afterlife, we have a Money-God allowing us to do everything we want, or at least this is what we believe. Everything consists of believing in something – this is why it is difficult to understand the term “religion” in its wider meaning.
But why is the Economy not a good thing? Because it is not possible to create prosperity for all. Wealth is a relative term; it is something that can belong only to a few. I can be rich only if you are poor; wealth is a relation: there has to be poverty in order to have wealth.
- “At the risk of sounding sentimental, I’ve always felt there are people who can leave an indelible mark on your soul, an imprint that can never be erased.” 
― Agent Broyles
- Many people in every society have strong likes and dislikes. But we practise tolerance to maintain cohesion. We keep our feelings in check for the greater good. Else, there would be anarchy on the streets.
- “The only thing necessary for the triumph of evil is for good men to do nothing.”― Edmund Burke (in a letter addressed to Thomas Mercer).
- “This might sound like science fiction, but space agencies and private companies around the world are actively trying to turn this aspiration into reality in the not-too-distant future,” said Professor Krausz who is from the ANU Research School of Chemistry. He was a co-author on a paper detailing the work of the team which appeared in the journal Science.
“Photosynthesis could theoretically be harnessed with these types of organisms to create air for humans to breathe on Mars. Low-light adapted organisms, such as the cyanobacteria we’ve been studying, can grow under rocks and potentially survive the harsh conditions on the red planet,” he added.
