
Thursday, August 11, 2011

Content Management System Evaluation

At some stage, most companies discover that, due to their increasing size, it's hard to keep track of what everyone is doing, and so they require some form of Intranet to be implemented. This happened at a company that I was working at, so we had to evaluate the many options available at the time. There are various classes/types of CMSs. Personal ones favour a flat file structure with no multi-user functionality, while the more complex/professional systems use database backends and a web based front end, have the ability to authenticate via a network directory service, and offer many add-ons which allow for extra functionality.

One thing you'll need to watch for is whether there is too much of a learning curve in using the application itself. I noticed this myself when using more mature systems such as Joomla. In such cases, either user training is going to be required and/or the content may need to be managed by your IT team. For mid-sized companies, Drupal is a decent compromise between scalability and functionality.

Installation is fairly simple and similar to other web based applications: install/untar the packages, configure your web server, set up the database backend, and configure the relevant files so that authentication works against the relevant backend. Thereafter, add any extra functionality as and when required. A rough sketch of these steps is included below.
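
Purely as an illustration, here is a minimal Python sketch of those steps for a hypothetical LAMP-style CMS; the archive name, paths, and database details are all assumptions and would need to be adapted to the actual package being installed.

import shutil
import subprocess

# All names/paths below are hypothetical examples only.
PACKAGE = "somecms-1.0.tar.gz"          # downloaded CMS archive
DOCROOT = "/var/www/html/somecms"       # directory served by the web server
DB_NAME = "somecms"

# 1. Untar the package into the web server's document root.
subprocess.run(["tar", "xzf", PACKAGE, "-C", "/var/www/html"], check=True)

# 2. Create the database backend (assumes a local MySQL/MariaDB instance).
subprocess.run(["mysql", "-u", "root", "-p", "-e",
                f"CREATE DATABASE {DB_NAME} CHARACTER SET utf8;"], check=True)

# 3. Copy the sample configuration and edit it to point at the database
#    and the authentication backend (LDAP/Active Directory/etc.).
shutil.copy(f"{DOCROOT}/config.sample.php", f"{DOCROOT}/config.php")
print("Now edit config.php and reload the web server.")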

http://en.wikipedia.org/wiki/List_of_content_management_systems

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Jocular Computing

One of the things that has always been entertaining for me is when technology attempts to interact with humans in a non-trivial fashion. While I was evaluating new phone systems for an old company I had the opportunity to experiment with some new phone lines on our VoIP phones. One of the things that I was working on was having the phone system 'attempt to tell a joke'. This idea was born from my previous experience with eMac systems back at University, which had the ability to tell a joke at random. They did this via a speech recognition system with a limited vocabulary (a small group of phrases, which reduces the search space; it's a technique I've commonly seen used in OEM speech control systems such as the one that comes standard with Toshiba laptops). It also relied on an XML based file that contained the actual jokes themselves. Thereafter, a randomised algorithm was used to select which joke to tell and what type of joke to tell (knock knock, why did the chicken cross the road, etc.). While it was not quite as entertaining as a real human, it was at least amusing and provided me with the idea for my little experiment. Random jokes were pulled from the web using a web scraper or downloaded manually, then reformatted and placed into a flat text file as follows.

'line number' 'tab character' 'joke'

Thereafter, when and if required, the string was encoded to 'wav' and/or another sound file format. A random number was chosen every once in a while to determine which particular joke to encode as a sound file. Then, when you call the 'Joke line', a script is called to determine which file to play via 'espeak', 'festival', or any other speech synthesis software. Obviously, I tried playing around with speech recognition as well, but using the phone as a microphone on a network with 'jumpy' traffic on a VoIP based phone system makes things a bit difficult. Maybe when I find myself on a network with more manageable traffic I'll continue this line of research. Some of the results were very interesting. If I remember correctly, my experiments seemed to suggest that the load ratio would be about 30 people to a single server (Dual Xeon 2.8/4GB/10K SAS) before there would be a drastic drop in performance, if I were thinking about completely automated phone based interviewing (a tangent from when I was working on web based interviewing technology, auto-generated code which worked around existing survey scripting languages and was backwards compatible with SPSS Quancept scripts). Below is a small sketch of the selection/synthesis step, followed by some of the notes from my research.
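
Purely as an illustration, here is a minimal Python sketch of the selection and encoding step described above, assuming a jokes.txt file in the 'line number' tab 'joke' format and that espeak and sox are installed; the file names and paths are hypothetical.

import random
import subprocess

JOKES_FILE = "/var/lib/jokeline/jokes.txt"   # hypothetical path: 'number<TAB>joke' per line
OUTPUT_WAV = "/tmp/joke-out.wav"             # file that the dialplan would later Playback()

# Read the flat file and keep only the joke text after the tab.
with open(JOKES_FILE) as f:
    jokes = [line.split("\t", 1)[1].strip() for line in f if "\t" in line]

# Pick a joke at random and synthesise it to a wav file with espeak.
joke = random.choice(jokes)
subprocess.run(["espeak", "-w", OUTPUT_WAV, joke], check=True)

# Resample to 8kHz mono for telephony playback if required
# (cf. the sample rate notes below regarding quality).
subprocess.run(["sox", OUTPUT_WAV, "-r", "8000", "-c", "1", "/tmp/joke-out-8k.wav"], check=True)

The resulting file can then be played from the dialplan in much the same way as the 'English Accent' example further down.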


- http://www.voip-info.org/wiki/view/Asterisk+cmd+Festival, text2wave is basically a wrapper script for Festival, which is scripted in a Lisp-like language (Scheme)
- init.scm and .festivalrc are two config files that are read at initialisation
- Utterance structure, http://www.cstr.ed.ac.uk/projects/festival/manual/festival_14.html
- english.wav is Audio,araw,Mono,22050Hz,16
- jokes-clean.wav is Audio,araw,Mono,8000Hz,16

- sox foo-in.wav -r 8000 -c 1 -s -w foo-out.wav resample -ql (downsample to 8000Hz, mono, signed 16-bit, using high quality resampling; older sox option syntax)

; ######################################
; English Accent
; ######################################

exten => *777,1,Answer                        ; answer the call to the joke/accent extension
exten => *777,2,Wait(1)                       ; short pause before playback
exten => *777,3,NoOp
exten => *777,4,System(/usr/bin/english)      ; external script (presumably generates /tmp/english-out)
exten => *777,5,Playback(/tmp/english-out)    ; play the generated file (Playback takes no extension)
exten => *777,6,Hangup
- sox can also translate a sound file in SUN Sparc .AU format into a Microsoft .WAV file, e.g. sox -v 0.5 file.au -r 12000 file.wav mask (converts the format while halving the volume, resampling to 12000Hz and applying the mask/dither effect)
- note to self: 8000Hz is completely incomprehensible; 22000Hz is a much more realistic sample rate

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Sunday, August 7, 2011

Server and Link Load Balancers

Once upon a time I worked at a company that had multiple terminal servers. It soon became apparent that they weren't balanced in terms of work load. Due to financial constraints I ended up writing a fairly simple load balancer, which is being augmented over time as time permits (the basic structure of the original script will be posted to my GitHub and/or website eventually). The way it works is fairly simple. A fixed list of servers held in a hash/dictionary data structure is iterated over. If socket connections can be made to all servers, then a naive algorithm (a random number 'mod' the total number of servers) is used to decide which server to connect to. If a server or servers are down, they are removed from the data structure first. Subsequent revisions have involved automatically mapping the network using NMAP, gaining server status information using WMI/SNMP, and storing/updating this data using SQLite to achieve a drop-in software based load balancer (the original script/program was written using a combination of Python and Perl, but I may assimilate and/or port it to another language if and when appropriate). A minimal sketch of the original selection logic is shown below.
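
What follows is only a minimal sketch of that selection logic, not the actual script; the host names, ports, and timeout are hypothetical placeholders.

import random
import socket

# Hypothetical terminal servers: name -> (host, port).
SERVERS = {
    "ts1": ("192.168.0.11", 3389),
    "ts2": ("192.168.0.12", 3389),
    "ts3": ("192.168.0.13", 3389),
}

def is_up(host, port, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def pick_server():
    """Drop unreachable servers, then pick one of the rest via random number mod N."""
    candidates = [name for name, (host, port) in SERVERS.items() if is_up(host, port)]
    if not candidates:
        raise RuntimeError("no terminal servers available")
    return candidates[random.randrange(2 ** 31) % len(candidates)]

if __name__ == "__main__":
    print("connect to:", pick_server())

The later revisions mentioned above (NMAP discovery, WMI/SNMP status, SQLite storage) would replace the hard coded dictionary and the naive reachability check.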

If you do have the resources, it may be worthwhile exploring some of the commercially available systems. Many of these, along with other mature software based load balancing options, are listed in "Building a Cloud Computing Service", http://dtbnguyen.blogspot.com
The interesting thing, though, is the increasing trend towards multi-level load balancing and redundancy. For example, application servers are often able to maintain state so that users do not have to re-login when a server goes down, and obviously there are many layer 3/4 load balancers available as well. In fact, even consumer networking companies are creating their own products nowadays.

http://haproxy.1wt.eu/

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Open Source Financial Trading

Recently, I've been speaking to some people in the trading/finance industry. One thing which struck me was the nature of the software and how most of it was built in house due to its domain specificity. Recently though, it's become fairly obvious that this is beginning to change. There have been attempts by various people to open source software that is capable of completing such transactions. Some of these are highly specific, while others are more general and allow you to add modules of your own to suit your application. Obviously, there will be some differences with regards to their suitability for simulations versus the real world as well.

http://www.fixprotocol.org/fast

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Unlocking Your Phone/Modem

It used to be the case that unlocking a communications device could only be done by a service provider for a fairly hefty fee. Now, most devices can be unlocked for minimal cost with only the IMEI number from the device and a program that can be purchased for a nominal fee or even obtained for free from the Internet (or, increasingly, from the handset/service provider). Of course there are other methods as well, such as re-flashing the device with generic firmware. I've also seen universal SIM card holders/converters which may require trimming/cutting your SIM card, which brings me to another point. One of the more interesting technologies that I've seen is a dual SIM card holder which can be used in your phone in order to give you the ability to make/take multiple calls at the same time.

All this said, please ensure that your device supports the frequencies required by the network that you subsequently need/want to use.

http://forums.whirlpool.net.au/archive/1262036

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Tuesday, July 26, 2011

GroupWare Evaluation

This project was born from the fact that the company wanted an inexpensive option with regards to intra-company communication. To this end we looked at both Open Source and commercial webmail/groupware solutions that were available at the time. Note that these results were from an evaluation of the primary options available about 1.5 years before the date of publication of this particular post and may or may not be applicable to your particular situation or needs. Please do your own due diligence.

Zimbra
Seems quite responsive, but not sure of the specifications of their evaluation box. Standard and Advanced GUIs that are accessible at the touch of a button. When installed on local (desktop) class hardware it was not as responsive as expected. Checked the minimum/recommended hardware specifications and the project no longer seems feasible from a financial perspective; even the minimum specifications on their website seem to be far in excess of the financial resources available to this company. Great desktop client, although it does seem to be resource intensive on our Dell GX260/270/280's. Obviously one of the first real Open Source alternatives as well, so it has a strong development history and community behind it.


Horde
Has most of the capabilities that we require but doesn't integrate particularly well with Outlook. Obviously there is the learning curve problem with regards to the web based interface. It's what we currently use, but based on user feedback it could be better. The latest version has come quite a long way. Surprisingly, synchronisation with your PDA is a realistic/simple option now.


OpenGroupWare
Only took a very cursory look at this. The LiveCD is in German, the interface was unwelcoming, and I highly doubt that users will be inclined to use it, so we did not continue with the evaluation. Moreover, the project seems to have become dormant; the last post on the project homepage seems to be from 2009. It does provide integration with various MUAs though.


Exchange
Basically, this is what our users want, but due to financial constraints we are unable to pursue it: our network is composed of a number of Linux servers as well as a small number of Windows based terminal servers which cannot be re-deployed to meet GroupWare needs. Moreover, the price of properly specified hardware for a Windows based server that is able to run Exchange does not fit into our budget. We calculate that the project may range anywhere between five and six figures depending on whether or not we factor in future expansion needs.

http://www.microsoft.com/exchange/en-us/default.aspx

Zarafa
The fact that this software is a remarkably faithful imitation of Microsoft Exchange has done it many favours. It basically works on top of an existing Linux stack with a web based interface on top. Just like Exchange, though, it uses MAPI error codes which can be almost impossible to decipher without a proper reference source. During evaluation we discovered that some services may need to be restarted to ensure proper/continuous operation without intervention, but we believe this may have more to do with mis-configuration, since there don't seem to be many other users online experiencing the same problem at this stage. Strong integration with Outlook, although it does seem to have a few problems with offline caching mode, which we later decided to turn off (this may be fixed by the time of publication of this post). Pricing is reasonable (four to five figures).


OpenXchange
Web interface reminiscent of Exchange. Pricing structure is reasonable at first glance (four figures). Has an Outlook plugin to allow for better integration. A possible contender, but it was not as mature as Zarafa at the time of evaluation.

http://oxpedia.org/index.php?title=Main_Page_AE#quickinstall

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Introduction to Modern Finance

While finance/trading/commerce has existed for several thousand years, never have the number and type of financial options available been so extravagant and exotic. Below is a reading list covering the basic asset classes and some of the terms that you will commonly find in today's financial markets. This list will be updated over time.

http://idm.net.au/article/trading-titans%E2%80%99-clash-swords-discovery-clash

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Friday, July 22, 2011

WAN Acceleration on a Budget

Once upon a time computers were completely independent. With the advent of networking (especially the Internet), latency, bandwidth, and redundancy are playing increasingly important roles. This is where WAN optimisation technology comes in. Depending on its configuration it can act as a proxy and/or a reverse proxy, or even in a multi-node configuration used to accelerate LAN traffic (most likely between multiple sites/nodes in a VPN) as well as WAN traffic. There are obviously many different vendors attempting to sell their wares, which include hardware appliances, virtualised appliances, and complete software solutions, whether installed on a dedicated server and/or your local desktop. Based on what I've seen they operate on multiple levels that may or may not be protocol dependent. Below are some of the more inexpensive options out there, along with premium options further down.



Thursday, July 21, 2011

Inexpensive NAS/SAN Device Evaluation Results

These are the results of experimentation using both server and desktop hardware with software based NAS/SAN solutions. The servers were single/dual socket Xeon systems with 4GB RAM, 7.2K/10K SAS/SATA drives and GigE, while the desktop systems (which were always where the NAS/SAN software was installed) were either a Celeron with 512MB/1GB/2GB RAM and 7.2K SATA drives or an Intel E8400 with 4GB RAM and 7.2K SATA drives. GigE was used whenever possible, although we had to resort to 10/100 Ethernet sometimes due to resource constraints. OpenFiler as well as FreeNAS were obviously tested. There is no guarantee that these results are valid across the board due to the low spec hardware being used, and they are as much for personal interest as for experimental validation. Moreover, the software versions being used are probably about 1.5 years old as of the date of publication of this post. Note that Windows based NAS/SAN solutions are also available, such as from StarWind, who make both target and initiator software for Windows Server based operating systems.

OpenFiler stats in RAID1 configuration using NTFS with default blocks over iSCSI over GigE on server-d3. Results are very consistent.
~50-60MB/s read via iSCSI
~50-60MB/s write via iSCSI

[root@server-c1 test]# smbclient '//192.168.200.218/vgname.vol.share'
WARNING: The "printer admin" option is deprecated
Password:
Domain=[OPENFILER.OPENDO] OS=[Unix] Server=[Samba 3.2.6]
smb: \> mget *
Get file CentOS-5.3-i386-bin-1of6.iso? yes
getting file \CentOS-5.3-i386-bin-1of6.iso of size 654186496 as
CentOS-5.3-i386-bin-1of6.iso (19788.6 kb/s) (average 19788.6 kb/s)
smb: \> mget *
Get file CentOS-5.3-i386-bin-4of6.iso? y
getting file \CentOS-5.3-i386-bin-4of6.iso of size 662644736 as
CentOS-5.3-i386-bin-4of6.iso (21133.0 kb/s) (average 20443.0 kb/s)
Get file CentOS-5.3-i386-bin-3of6.iso? y
getting file \CentOS-5.3-i386-bin-3of6.iso of size 665085952 as
CentOS-5.3-i386-bin-3of6.iso (19755.4 kb/s) (average 20207.0 kb/s)
Get file CentOS-5.3-i386-bin-5of6.iso? y
getting file \CentOS-5.3-i386-bin-5of6.iso of size 668745728 as
CentOS-5.3-i386-bin-5of6.iso (20591.9 kb/s) (average 20302.7 kb/s)

OpenFiler stats in RAID1 configuration using NTFS with default blocks over iSCSI over GigE. Results are extremely variable depending on the file/s being tested.
~60-110MB/s read via iSCSI
~10-30MB/s write via iSCSI

OpenFiler stats in RAID0 configuration using NTFS over iSCSI over 10/100 Fast Ethernet.
10.4MB/s via iSCSI

OpenFiler stats in RAID0 configuration using Ext3 over SMB over 10/100 Fast Ethernet.

smb: \> mput ENDIAN*
Put file ENDIAN-FIREWALL-SOFTWARE-APPLIANCE-DEMO.iso? yes
putting file ENDIAN-FIREWALL-SOFTWARE-APPLIANCE-DEMO.iso as
\ENDIAN-FIREWALL-SOFTWARE-APPLIANCE-DEMO.iso (9150.8 kb/s) (average
9150.8 kb/s)
smb: \> mput openfiler*
Put file openfiler-2.3-x86-disc1.iso? yes
putting file openfiler-2.3-x86-disc1.iso as
\openfiler-2.3-x86-disc1.iso (8907.6 kb/s) (average 8990.8 kb/s)
smb: \> mput elastix*
Put file Elastix-1.5.2-stable-i386-bin-31mar2009.iso? yes
putting file Elastix-1.5.2-stable-i386-bin-31mar2009.iso as
\Elastix-1.5.2-stable-i386-bin-31mar2009.iso (9147.2 kb/s) (average
9077.9 kb/s)

OpenFiler stats in RAID1 configuration using Ext3 over SMB over 10/100 Fast Ethernet.

[root@server-m ISOS]# smbclient '//192.168.200.218/raid1v.raid1vn.SHARE'
Password:
Domain=[OPENFILER.OPENDO] OS=[Unix] Server=[Samba 3.2.6]
smb: \> dir
. D 0 Wed Jun 10 16:54:56 2009
.. D 0 Wed Jun 10 16:54:56 2009

44010 blocks of size 16777216. 41762 blocks available
smb: \> mput ENDIAN*
Put file ENDIAN-FIREWALL-SOFTWARE-APPLIANCE-DEMO.iso? y
putting file ENDIAN-FIREWALL-SOFTWARE-APPLIANCE-DEMO.iso as
\ENDIAN-FIREWALL-SOFTWARE-APPLIANCE-DEMO.iso (7962.9 kb/s) (average
7962.9 kb/s)
smb: \> mput openfiler*
Put file openfiler-2.3-x86-disc1.iso? y
putting file openfiler-2.3-x86-disc1.iso as
\openfiler-2.3-x86-disc1.iso (8888.7 kb/s) (average 8542.7 kb/s)
smb: \> mput elastix*
Put file Elastix-1.5.2-stable-i386-bin-31mar2009.iso? y
putting file Elastix-1.5.2-stable-i386-bin-31mar2009.iso as
\Elastix-1.5.2-stable-i386-bin-31mar2009.iso (8855.3 kb/s) (average
8715.4 kb/s)

FreeNAS stats in a RAID 0 formatted configuration using 10/100 Fast Ethernet with UFS using NFS.

-rw-r--r-- 1 root root 180313088 May 9 05:01
hyperic-hq-installer-4.1.2-win32.msi

[root@localhost Hyperic]# time `cp hyperic-hq-installer-4.1.2-win32.msi /mnt/freenas/`

real 0m33.350s
user 0m0.034s
sys 0m0.503s

-rwxr----- 1 root root 482002540 Apr 22 22:57
zcs-5.0.14_GA_2850.RHEL5.20090303142201.tgz

[root@localhost ~]# time `cp zcs-5.0.14_GA_2850.RHEL5.20090303142201.tgz /mnt/freenas/`

real 0m48.765s
user 0m0.081s
sys 0m1.566s

-rwxr--r-- 1 root root 19509248 May 5 09:48 WindowsXP-KB936929-SP3-x86-ENU.exe

[root@localhost ~]# time `cp WindowsXP-KB936929-SP3-x86-ENU.exe /mnt/freenas/`

real 0m3.386s
user 0m0.005s
sys 0m0.052s

Definitely hit a bottleneck somewhere here. Dead flat line on transfer test now...

FreeNAS stats in a RAID 0 formatted configuration using 10/100 Fast Ethernet with UFS.
11MB/s FTP
8.6MB/s SMB

FreeNAS stats in a RAID 0 formatted configuration using 10/100 Fast Ethernet with 2GB RAM/64 KByte Blocks, then remounted as an iSCSI target.
9.2MB/s iSCSI via HD-Tune

FreeNAS stats in a RAID 0 Configuration using 10/100 Fast Ethernet with 2GB RAM/64 KByte Blocks.
10.4MB/s iSCSI via HD-Tune

FreeNAS stats in a RAID 0 Configuration using 10/100 Fast Ethernet with 2GB RAM/4096 Byte Blocks.
10.4MB/s iSCSI via HD-Tune

FreeNAS stats in a RAID 0 Configuration using 10/100 Fast Ethernet with 2GB RAM.
10.4MB/s iSCSI via HD-Tune

FreeNAS stats in a RAID 0 Configuration using 10/100 Fast Ethernet with 512MB RAM.
10.2MB/s iSCSI via HD-Tune

FreeNAS stats in a RAID 1 Configuration using 10/100 Fast Ethernet with 512MB RAM.
11MB/s FTP
10MB/s SMB
9.8MB/s iSCSI via HD-Tune
4MB/s SCP
1.3 MB/s HTTP

Test LACP later using ProCurve (Switch_C) in order to determine impact of link aggregation.

OpenFiler was attempted and worked in a single disk configuration with LDAP authentication, but when it was run with RAID 1 and LDAP the entire graphical interface seemed to break. For instance, the evaluation page seemed to stop loading.

Things to definitely check for when building a software based NAS are cable connections (especially IDE/SATA cables), power supply capacity, adequate network bandwidth, and a means through which to test the bandwidth of your connection (a rough option is sketched below).
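
As a rough 'means to test the bandwidth of your connection', a minimal Python throughput test along the following lines can be handy; it is no substitute for a proper tool such as iperf, and the port number and transfer size are arbitrary.

import socket
import sys
import time

PORT = 5201                        # arbitrary test port
PAYLOAD = b"x" * 65536             # 64 KiB chunks
TOTAL_BYTES = 100 * 1024 * 1024    # send 100 MiB per test

def serve():
    """Accept one connection and discard whatever arrives; run this on the NAS end."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        with conn:
            while conn.recv(65536):
                pass

def send(host):
    """Push TOTAL_BYTES to the server and report the effective throughput."""
    start = time.time()
    with socket.create_connection((host, PORT)) as sock:
        sent = 0
        while sent < TOTAL_BYTES:
            sock.sendall(PAYLOAD)
            sent += len(PAYLOAD)
    elapsed = time.time() - start
    print(f"{sent / elapsed / 1024 / 1024:.1f} MB/s over {elapsed:.1f}s")

if __name__ == "__main__":
    serve() if len(sys.argv) == 1 else send(sys.argv[1])

Run it with no arguments on the NAS end and with the NAS's IP address as the only argument on the client end.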

Obviously, FC (Fibre Channel) is not within our current budgetary means.

If we had the budget to build a hardware based DAS/NAS/SAN solution for under 1K, we would most likely look at the following solutions:

- Synology DS-209 (good feature set, high price, fast transfer rates)
- Netgear ReadyNAS Duo and its variants (good feature set, reasonable price, reasonable transfer rates)
- Thecus N2050B (very few reviews available though and is technically a DAS device that uses eSATA)

Options that we would definitely avoid are the following:

- DLink DNS-323 (has problems with RAID rebuild based on reviews)
- Linksys NAS200 (slow transfer rates, 10/100 only)
- NetGear SC-101/101T (not a real NAS/SAN, 10/100 only, limited number/type drives can be used)

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Wednesday, July 20, 2011

Apache Web Server Configuration

It's become almost essential for any IT professional to be able to set up a web server nowadays. While the advent of so called personal web server solution stacks (such as XAMPP) has aided testing/development, it's still necessary to understand a number of concepts in order to configure one.


First, a web server is just a means of delivering static and/or dynamic content to an end user via a computer network.

Second, content is going to be delivered from a single actual directory on a physical/virtual host which is 'pointed to' via DNS records on the Internet.

Third, through the advent of Virtual Hosts, many web servers are able to serve more than one website at a time (based on IP and/or name); a small sketch of name based virtual hosting is included after these points.


Fourth, a basic web server is often only able to host basic content and needs to be reconfigured in order to provide additional functionality. For example, you may require additional modules such as one for MySQL, or a 'Handler', in order to provide this.


Fifth, not all web servers are created equal. Like most other products and services, a web server can be designed with certain parameters in mind, such as speed, security, or even certain technology frameworks. These differences can be particularly important in deciding which server you end up choosing depending on the circumstances. For example, so called application servers can have fairly specific capabilities that are not often available in conventional web servers.
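
The following is only a toy illustration of the name based virtual hosting idea from the third point, written as a small Python script rather than as actual web server configuration; the host names are made up and a real deployment would use the web server's own virtual host directives.

from http.server import BaseHTTPRequestHandler, HTTPServer

# Hypothetical name based "virtual hosts": Host header -> response body.
SITES = {
    "www.example-one.test": b"<h1>Site one</h1>",
    "www.example-two.test": b"<h1>Site two</h1>",
}

class VirtualHostHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Pick the site based on the Host header, just as a name based virtual host does.
        host = (self.headers.get("Host") or "").split(":")[0]
        body = SITES.get(host, b"<h1>Unknown site</h1>")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

if __name__ == "__main__":
    HTTPServer(("", 8080), VirtualHostHandler).serve_forever()

Requesting the same address with different Host headers (e.g. curl -H 'Host: www.example-one.test' http://localhost:8080/) shows the routing behaviour.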

http://www.vogella.de/articles/ApacheTomcat/article.html

- as usual thanks to all of the individuals and groups who purchase and use my goods and services
http://sites.google.com/site/dtbnguyen/
http://dtbnguyen.blogspot.com.au/

Tuesday, July 19, 2011

Financial Protocol Interfacing

It used to be the case that if you had an interest in a particular field/domain, you had to purchase/trial expensive proprietary software to enable you to 'play with it'. Now that is no longer the case. A lot of the major protocols have become standardised, which has led to a proliferation of commercial implementations as well as open source alternatives which can be used to simulate real environments through test driven development based on black box testing. Play around with virtual interfaces, a bit of fuzzing/random data, random lagging, as well as a proper test bench which is able to supply a large set of data, and you could very well accurately simulate real world communications exchange. For the sake of brevity we'll leave most of the details to the reader, though a tiny sketch of the random lag/fuzzing idea is included below.
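
As a toy sketch of the 'random lagging plus fuzzed data' idea, something along the following lines will do; it assumes a locally running test service on a hypothetical port and is not tied to any particular protocol implementation.

import random
import socket
import string
import time

TARGET = ("127.0.0.1", 9876)   # hypothetical local test service

def fuzzed_message(length=64):
    """Build a random printable payload to throw at the service."""
    return "".join(random.choice(string.printable) for _ in range(length)).encode()

def run(iterations=100):
    for _ in range(iterations):
        time.sleep(random.uniform(0.0, 0.5))          # random lag between messages
        try:
            with socket.create_connection(TARGET, timeout=2.0) as sock:
                sock.sendall(fuzzed_message())
                sock.settimeout(2.0)
                reply = sock.recv(4096)               # see how the service responds
                print("reply:", reply[:60])
        except OSError as exc:
            print("connection problem:", exc)

if __name__ == "__main__":
    run()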

From a personal perspective, while you may not be able to directly interface with an exchange, you are still more than able to track the movements of various assets and indexes by interfacing with various web based sources of information through a web scraper, and then using this information in combination with internal/external algorithms to produce a recommendation of whether to buy/sell, which can in turn be passed to an Internet based broker to conduct your transactions (a rough sketch of this pipeline follows). While there is evidence to suggest that technical/human trading can fall behind simply tracking an index over the long term (especially after taxes are factored in), it is said that contrarian strategies may be able to buck this trend, and rarely a small group of firms are able to beat the market over the long term. However, I'm not entirely sure this factors in all types of trading out there.
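
The sketch below is purely illustrative of that scrape/decide/act pipeline; the URL and CSV layout are invented placeholders, the 'algorithm' is just a naive moving average comparison, and the broker step is reduced to a print statement.

import csv
import io
import urllib.request

# Hypothetical CSV source of daily closes, one "date,close" pair per line (placeholder URL).
PRICE_URL = "http://example.com/prices/somestock.csv"

def fetch_closes(url=PRICE_URL):
    """Scrape closing prices from a simple CSV feed, skipping headers/blank lines."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        text = resp.read().decode()
    closes = []
    for row in csv.reader(io.StringIO(text)):
        try:
            closes.append(float(row[1]))
        except (IndexError, ValueError):
            continue
    return closes

def recommend(closes, short=10, long=50):
    """Naive signal: compare a short moving average against a longer one."""
    if len(closes) < long:
        return "HOLD"
    short_ma = sum(closes[-short:]) / short
    long_ma = sum(closes[-long:]) / long
    return "BUY" if short_ma > long_ma else "SELL"

if __name__ == "__main__":
    signal = recommend(fetch_closes())
    # In a real setup this is where the order would be passed to the broker's interface.
    print("recommendation:", signal)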


