Recently, we barely scratched the surface of characterizing fricative phonemes. Now, how do we characterize nasal consonants acoustically? A good answer to this question would require plenty of explanation; I suggest you check pages 487-514 of Acoustic Phonetics by Professor Kenneth Stevens. But I'll provide some hints anyway. Nasal consonants are sonorant phonemes, but they exhibit significant losses due to the coupling of the nasal tract. Further, nasal spectra are relatively stable during the oral tract closure (there are minimal acoustic alterations). Typically, F1 is located near 250 Hz, F2 is weak, and F3 is near 2 kHz. Remember that for these phonemes the acoustic energy also transits the nasal cavities, which have different frequency properties. But the oral tract, albeit closed, also alters the acoustic transfer function. For simple phonemes such as vowels, this transfer function includes only poles. However, when the oral tract is closed, the acoustic transfer function also includes zeros, and that changes the output a great deal. The location of the first spectral zero of nasal consonants depends on the point of oral closure (for instance, the point of closure for /m/ is more anterior than /n/'s).
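To see what a spectral zero does to the output, here is a minimal sketch of mine that evaluates the magnitude response of a one-pole-pair, one-zero-pair transfer function; the pole and zero frequencies and bandwidths are illustrative values, not measurements (build with gcc file.c -lm):

/* Sketch: magnitude response of a pole-zero model, showing how a zero
   carves a notch into an otherwise all-pole (vowel-like) spectrum.
   All frequencies and bandwidths below are made-up illustrative values. */
#include <stdio.h>
#include <math.h>
#include <complex.h>

/* Map a resonance (or anti-resonance) at freq_hz with bandwidth bw_hz
   to a point in the z-plane, for sampling rate fs. */
static double complex zpoint(double freq_hz, double bw_hz, double fs)
{
    double r = exp(-M_PI * bw_hz / fs);       /* radius from bandwidth */
    double theta = 2.0 * M_PI * freq_hz / fs; /* angle from frequency  */
    return r * cexp(I * theta);
}

int main(void)
{
    const double fs = 16000.0;                    /* sampling rate (Hz)      */
    double complex p = zpoint(250.0, 100.0, fs);  /* pole pair near nasal F1 */
    double complex z = zpoint(1000.0, 150.0, fs); /* hypothetical oral zero  */

    for (double f = 100.0; f <= 4000.0; f += 100.0) {
        double complex w = cexp(I * 2.0 * M_PI * f / fs);  /* e^{j*omega} */
        /* |H| = |zero pair| / |pole pair|, conjugates included. */
        double mag = cabs((w - z) * (w - conj(z))) /
                     cabs((w - p) * (w - conj(p)));
        printf("%6.0f Hz  %8.2f dB\n", f, 20.0 * log10(mag));
    }
    return 0;   /* look for the peak near 250 Hz and the notch near 1 kHz */
}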
Hints at Speech Inverse Filtering of Fricative Phonemes
Is it possible to invert fricatives by using Childers’ Toolboxes?
At first sight, I think the answer is that you can't. IIRC, Childers' toolbox allowed for inversion of the sentence "we were away a year ago". But that's a very convenient sentence to invert, because most of its relevant acoustic information can be clearly seen with a formant analysis. That's not the case for fricatives (and nasals, for instance, pose other interesting problems too).
For my thesis, I developed my own inversion toolbox. But no matter the toolbox, you require a "source" of information for inversion. That information may be a spectral energy distribution, formants, etc. For fricatives, formants are out of the question. As you know, the spectrum of fricatives differs markedly from that of voiced phonemes. When we utter fricatives, the oral tract naturally adopts a specific "constriction" configuration… and such a configuration would yield a formant structure. The problem is that the turbulence generated in the oral tract hides the resonances, and that's why formant tracking is misleading in such cases.
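For illustration, here is a rough sketch of mine (not from any toolbox; the frame contents and band edges are made up) of the kind of "source" information one can compute instead of formants: a coarse spectral energy distribution obtained with a naive DFT.

#include <stdio.h>
#include <math.h>

#define N 256   /* analysis frame length (samples) */

/* Energy of the DFT bins falling inside [f_lo, f_hi); naive O(N^2) DFT. */
static double band_energy(const double *x, double fs, double f_lo, double f_hi)
{
    double energy = 0.0;
    for (int k = 0; k < N / 2; k++) {
        double f = k * fs / N;
        if (f < f_lo || f >= f_hi)
            continue;
        double re = 0.0, im = 0.0;
        for (int n = 0; n < N; n++) {
            re += x[n] * cos(2.0 * M_PI * k * n / N);
            im -= x[n] * sin(2.0 * M_PI * k * n / N);
        }
        energy += re * re + im * im;
    }
    return energy;
}

int main(void)
{
    double frame[N], fs = 16000.0;
    for (int n = 0; n < N; n++)  /* stand-in frame; use a real fricative frame here */
        frame[n] = sin(0.5 * n) + 0.5 * sin(2.9 * n);

    /* Fricatives concentrate energy in the high bands; compare low vs. high. */
    printf("0-2 kHz:  %g\n", band_energy(frame, fs, 0.0, 2000.0));
    printf("4-8 kHz:  %g\n", band_energy(frame, fs, 4000.0, 8000.0));
    return 0;
}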
Continue reading “Hints at Speech Inverse Filtering of Fricative Phonemes”
Resources for Articulatory Synthesis Research
A list of important documents for Articulatory Speech Synthesis and Inversion research.
The articulatory approach is a very captivating research topic, but it's relatively hard, and it rests on a hefty body of multidisciplinary documents and results. The germane papers and books are somewhat old or difficult to find. This is my list of selected resources:
Books
- Gunnar Fant – Acoustic Theory of Speech Production
- James L. Flanagan – Speech Analysis, Synthesis and Perception
- Kenneth N. Stevens – Acoustic Phonetics
- Paul Boersma – Functional Phonology
- J. M. Pickett – The Acoustics of Speech Communication
- D. G. Childers – Speech Processing and Synthesis Toolboxes
- A. Seikel, D. King and D. Drumright – Anatomy and Physiology for Speech, Language and Hearing. Lovely book.
Continue reading “Resources for Articulatory Synthesis Research”
A Central Abstraction: The Process
Abstractions
I strongly believe that abstraction is at the root of computing (however, you may want to read Is abstraction the key to computing? as a motivation for a different perspective on the role of abstraction in computing). Modern hardware and software systems include a lot of features and perform so many tasks that it is impossible to understand, build, and use them without resorting to abstractions. For instance, let's take a look at the CPU: it is the central part of a general-purpose computing system, and is also an extremely complex system in itself. Functionally, a CPU is an instruction-crunching device: it processes one instruction after another, following the steps of fetch, decode, execute, and writeback (in von Neumann architectures). In other words, the CPU retrieves the instruction from memory, decodes it, executes it, and puts the results of the operation back into memory. Further, the CPU has no clue (and actually does not care) about the higher-level semantics of the instruction it may be executing at a specific time. For example, the CPU may be executing an instruction related to a spell-checking task, and a few instructions later it may be executing an instruction related to another task, say, MP3 playing. It only follows orders, and just executes the instruction it is told to execute.
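A toy sketch may help picture this cycle. The following loop interprets a made-up two-instruction "ISA" (the opcodes and memory layout are invented for illustration); like a real CPU, it mechanically fetches, decodes, executes, and writes back, with no idea what the program is "about":

#include <stdio.h>

enum { OP_HALT = 0, OP_ADD = 1 };

int main(void)
{
    /* program: ADD mem[5], mem[6] -> mem[7]; HALT. Data lives at cells 5-7. */
    int mem[8] = { OP_ADD, 5, 6, 7, OP_HALT, 20, 22, 0 };
    int pc = 0;

    for (;;) {
        int opcode = mem[pc];                                 /* fetch     */
        if (opcode == OP_HALT) {                              /* decode    */
            break;
        } else if (opcode == OP_ADD) {
            int result = mem[mem[pc + 1]] + mem[mem[pc + 2]]; /* execute   */
            mem[mem[pc + 3]] = result;                        /* writeback */
            pc += 4;
        } else {
            break;                                            /* unknown opcode */
        }
    }
    printf("mem[7] = %d\n", mem[7]);                          /* prints 42 */
    return 0;
}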
Nowadays, computing systems are expected to do more tasks on behalf of their users. Several tasks must be performed concurrently. As in the previous example, the system might be running the spell-checker and the media player simultaneously. In multiprogrammed systems we achieve pseudoparallelism by switching (multiplexing) the CPU among all the user's activities (true parallelism is only possible in multi-processor or multi-core systems). Remember that multiprogramming requires the CPU to be allocated to each of the system's tasks for a period of time and deallocated when some condition is met.
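The following toy loop (a deliberately naive sketch; the task names and work units are made up) multiplexes a pretend CPU between two tasks in fixed quanta, which is the essence of the pseudoparallelism described above. A real scheduler reacts to interrupts and blocking conditions, not a fixed for-loop:

#include <stdio.h>

#define NTASKS  2
#define QUANTUM 3   /* "instructions" per time slice */

int main(void)
{
    const char *task[NTASKS] = { "spell-checker", "mp3-player" };
    int remaining[NTASKS] = { 7, 5 };   /* pseudo-work left per task */
    int left = NTASKS;

    while (left > 0) {
        for (int t = 0; t < NTASKS; t++) {
            if (remaining[t] <= 0)
                continue;
            int slice = remaining[t] < QUANTUM ? remaining[t] : QUANTUM;
            printf("CPU -> %s (%d units)\n", task[t], slice);
            remaining[t] -= slice;      /* the task used its quantum */
            if (remaining[t] == 0) {
                printf("%s done\n", task[t]);
                left--;
            }
        }
    }
    return 0;
}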
Continue reading “A Central Abstraction: The Process”
Retrieving system time: gettimeofday()
Some time ago, a friend of mine reported a problem with gettimeofday() under MinGW. It was a relatively common error: 'gettimeofday' undeclared (first use this function). The cause and solution of this problem are kind of easy, and we'll present them at the end of the post. But what is that function gettimeofday(), anyway?
gettimeofday() is a function for retrieving system time in POSIX-compliant systems. Unlike the time() function, which has a resolution of 1 second, gettimeofday() has a higher resolution: microseconds. Specifically, the prototype of gettimeofday() is:
int gettimeofday (struct timeval *tp, struct timezone *tzp)
The function retrieves the current time expressed as seconds and microseconds since the Epoch, and stores it in the timeval structure pointed to by tp. The struct timeval has the following members:
- long int tv_sec: Number of whole seconds of elapsed time.
- long int tv_usec: The rest of the elapsed time (a fraction of a second), represented as the number of microseconds.
Thanks to the tv_usec member, we have a resolution of microseconds. It's also important to remember what the Epoch is. The Epoch is just an arbitrary starting date set by the system in order to compute time, i.e., it's a reference or base time. For instance, POSIX-compliant systems measure system time as the number of seconds elapsed since the start of the Epoch at 1970-01-01 00:00:00 Z.
The struct timezone, for its part, was used to return information about the time zone. However, using this parameter is obsolete (e.g., it has not been and will not be supported by libc or glibc). Therefore, tzp should be a null pointer, or the behavior may be unspecified (check your system's specifications).
gettimeofday() returns 0 on success and -1 on failure. Simple.
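Putting the pieces together, here is a minimal usage sketch (my own example, not from the original post): it times a busy loop and prints the elapsed microseconds.

#include <stdio.h>
#include <sys/time.h>

int main(void)
{
    struct timeval start, end;

    if (gettimeofday(&start, NULL) != 0)   /* tzp must be NULL */
        return 1;

    for (volatile long i = 0; i < 10000000L; i++)
        ;                                  /* some work to measure */

    if (gettimeofday(&end, NULL) != 0)
        return 1;

    /* combine whole seconds and the microsecond remainder */
    long usec = (end.tv_sec - start.tv_sec) * 1000000L
              + (end.tv_usec - start.tv_usec);
    printf("Elapsed: %ld microseconds\n", usec);
    return 0;
}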
Further, this function should be declared in sys/time.h. But my friend's installation of MinGW only included the following in sys/time.h:
#include <windows.h>

#ifndef _TIMEVAL_DEFINED /* also in winsock[2].h */
#define _TIMEVAL_DEFINED
struct timeval {
    long tv_sec;
    long tv_usec;
};
#define timerisset(tvp)  ((tvp)->tv_sec || (tvp)->tv_usec)
#define timercmp(tvp, uvp, cmp) \
    (((tvp)->tv_sec != (uvp)->tv_sec) ? \
     ((tvp)->tv_sec cmp (uvp)->tv_sec) : \
     ((tvp)->tv_usec cmp (uvp)->tv_usec))
#define timerclear(tvp) (tvp)->tv_sec = (tvp)->tv_usec = 0
#endif /* _TIMEVAL_DEFINED */
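As the listing shows, this sys/time.h defines struct timeval and a few timer macros, but never declares gettimeofday() itself; hence the "undeclared" error. One common workaround (a sketch of the general idea, not necessarily the only fix) is to supply the function yourself on top of the Win32 API:

/* Sketch: a gettimeofday() substitute for MinGW built on Win32.
   GetSystemTimeAsFileTime() counts 100-nanosecond intervals since
   1601-01-01; we convert to microseconds and shift to the Unix Epoch.
   11644473600 is the number of seconds between 1601 and 1970. */
#include <windows.h>   /* also brings in struct timeval via winsock */

int gettimeofday(struct timeval *tp, void *tzp)
{
    FILETIME ft;
    unsigned long long t;

    (void)tzp;                        /* obsolete; ignored */
    GetSystemTimeAsFileTime(&ft);
    t = ((unsigned long long)ft.dwHighDateTime << 32) | ft.dwLowDateTime;
    t /= 10;                          /* 100-ns units -> microseconds */
    t -= 11644473600000000ULL;        /* shift Epoch from 1601 to 1970 */
    tp->tv_sec  = (long)(t / 1000000ULL);
    tp->tv_usec = (long)(t % 1000000ULL);
    return 0;
}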
Metaheurística III
The customer is always right… and never knows what they want.
Balancing those two truths is quite an art.
Metaheurística II
No application is free of bugs.
You can always, after the fact, alter some functional requirement of the system. And there you have it: a bug. By the way, the customer doesn't care about the fine distinctions between bug, fault, and failure, nor about the IEEE standards on the matter. This reference to the customer takes us straight to Metaheurística III.
coLinux, int 80 on Windows and other rants
Generally speaking, an Application Binary Interface (ABI) is the interface between an application program and the operating system. Conceptually, it's related to the better-known API concept, but ABIs are a low-level notion, while APIs lean toward the application source code level.
Recently, a friend sent me an email describing some problems he faced when trying to assemble, on Cygwin, code originally targeted at Linux. The problem, as he stated it, was that int 0x80 didn't perform as expected. Well, plenty of explanations are pertinent:
Cygwin
Cygwin allows you to run a collection of Unix tools on Windows, including the GNU development toolchain. However, at its core, Cygwin is a library which translates the POSIX system call API into the pertinent Win32 system calls (system calls are often abbreviated as syscalls). Therefore, Cygwin is a software layer between applications using POSIX system calls and the Win32 operating systems, which allows porting some Unix applications to Windows. This way you can, for instance, have the Apache daemon working as a Windows service.

Another very attractive feature of Cygwin is its interactive environment: you can run your shell quite nicely, and run your Autoconf scripts, for example. However, porting means recompiling. There is no binary compatibility, and your program cannot run on computers without Cygwin (without CYGWIN1.DLL, more precisely). Furthermore, although some progress has been made, Cygwin is relatively slow (it's a POSIX compatibility layer, after all). If possible, I prefer to recompile my applications directly with MinGW. For me, this allows for a faster development cycle. Note, though, that Cygwin can compile MinGW-compatible executables. It's just that, as said, I prefer to work with MinGW directly. I only work on Windows if I have to develop applications for Windows. But Linux's development tools are the best, and we can access several of them by using MinGW. I think Cygwin is best suited for general cross-development and for handling complicated software ports.
System Calls and int 0x80
A system call is a request by an active process for a service performed by the operating system kernel. Remember that a process is an executing (running) instance of a program, and the active process is the one currently using the CPU. The active process may perform a system call to request the creation of another process, for instance. Or perhaps the process needs to communicate with a peripheral device. On Linux on x86, int 0x80 is the assembly language instruction used to invoke system calls. int 0x80 raises a software interrupt: it is triggered by a software process, not by a hardware device. Before raising the interrupt, our program has to store the system call number (which lets the operating system know what service the program is requesting) in the proper CPU register (EAX, on x86). Every interrupt is a signal to the operating system, notifying it of an event that must be handled.
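To make this concrete, here is a small sketch of mine (not my friend's code) of a raw write(2) call through int 0x80, using GCC inline assembly. It assumes a 32-bit Linux target (build with gcc -m32), which is precisely why the same instruction won't behave as expected under Cygwin or Windows:

int main(void)
{
    const char msg[] = "hello via int 0x80\n";
    long ret;

    /* Linux/x86 syscall convention: number in EAX (4 = __NR_write),
       arguments in EBX, ECX, EDX; the kernel's reply comes back in EAX. */
    __asm__ volatile ("int $0x80"
                      : "=a"(ret)
                      : "a"(4),              /* __NR_write      */
                        "b"(1),              /* fd 1 = stdout   */
                        "c"(msg),            /* buffer          */
                        "d"(sizeof msg - 1)  /* byte count      */
                      : "memory");
    return ret < 0;
}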
Continue reading “coLinux, int 80 on Windows and other rants”
hello world, C and GNU as
One thing all these programs had in common was their use of function 09h of INT 21h to print the "hello, world!" string. But it's time to move forward. Now I plan to use the lovely C printf function.
Finally, it’s time to switch to the fabulous GNU as. We’ll forget about DEBUG for some time. Thanks DEBUG. GNU as, Gas, or the GNU Assembler, is obviously the assembler used by the GNU Project. It is part of the Binutils package, and acts as the default back-end of gcc. Gas is very powerful and can target several computer architectures. Quite a program, then. As most assemblers, Gas’ input is comprised of directives (also referred to as Pseudo Ops), comments, and of course, instructions. Instructions are very dependent on the target computer architecture. Conversely, directives tend to be relatively homogeneous.
1 Syntax
Originally, this assembler only accepted the AT&T assembler syntax, even for the Intel x86 and x86-64 architectures. The AT&T syntax differs from the one used in most Intel references. There are several differences, the most memorable being that two-operand instructions take the source and destination in the opposite order. For example, the instruction mov ax, bx would be expressed in AT&T syntax as movw %bx, %ax, i.e., the rightmost operand is the destination, and the leftmost one is the source. Another distinction is that register names used as operands must be preceded by a percent (%) sign. However, since version 2.10, Gas supports Intel syntax by means of the .intel_syntax directive. In what follows, though, we'll be using AT&T syntax.
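A quick way to see the operand order in action from C is GCC's inline assembler, which speaks AT&T syntax by default. This tiny sketch (an illustration of mine) moves a 16-bit value with movw, source first:

#include <stdio.h>

int main(void)
{
    unsigned short src = 42, dst = 0;

    __asm__ ("movw %1, %0"        /* AT&T: movw source, destination */
             : "=r"(dst)          /* %0: destination register       */
             : "r"(src));         /* %1: source register            */

    printf("dst = %hu\n", dst);   /* prints 42 */
    return 0;
}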
Software Design, Trials and Errors
Good software design always considers potential failure scenarios. That's easier said than done. Frequently, even detecting the failure may be a hard task. That's another reason why design should always favor software construction based on low-coupled components: theoretically, it should be easier to isolate and identify the part at fault. Now, if a failure occurs, what will the system do? Mask the failure? Inform the user about the failure and ask her for directions? Try to recover from the failure automatically? Nice questions, even prettier core dumps.
Today I read a succinct and instructive article by Professor Robert L. Glass, published in Communications of the ACM, Volume 51, Number 6 (2008). Professor Glass is a widely respected expert in the Software Engineering area, and his prose is always very eloquent and a pleasure to read. The specific article is Software Design and the Monkey’s Brain, and it attempts to capture the nature of software design. By the way, if you enjoy that article, you may also like a book by Professor Glass: Software Creativity 2.0, in which he expands on the role of creativity in software engineering and computer programming in general. Essentially, the article Software Design and the Monkey’s Brain deals with two intertwined observations:
- Software Design is a sophisticated trial and error (iterative) activity.
- Such iterative process mostly occurs inside the mind (at the speed of thought).
In the following, I'll present my own thoughts on this topic. Regarding the first observation, I think that trial and error (I've also found the expression trial by error) is the underlying problem-solving approach of every software engineering methodology, like it or not. Alas, there is no algorithmic, perfectly formalized framework for creating software. In his classic book Object-Oriented Analysis and Design, Grady Booch says:
The amateur software engineer is always in search of magic, some sensational method or tool whose application promises to render software development trivial. It is the mark of the professional software engineer to know that no such panacea exists.
I totally agree. Nevertheless, some people dislike this reality. Referring to Software Engineering, a few (theorist) teachers of mine rejected calling it "Engineering". These people cannot live without "magic". Indeed, there are significant conceptual differences between software practitioners and some (stubborn) computer scientists with regard to Software Engineering's nature. These scientists are not very fond of the trial and error approach. In his article, Professor Glass presents some past investigations which verified that designing software is a trial and error iterative process. He also reflects on the differences in professional perceptions:
This may not have been a terribly acceptable discovery to computer scientists who presumably had hoped for a more algorithmic or prescriptive approach to design, but to software designers in the trenches of practice, it rang a clear and credible bell.
I like to think of software construction as a synthesis process. Specifically, there are two general factors in tension: human factors and artificial factors. The former are mostly informal, the latter mostly formal. From that conflict, software emerges. Let's remember that a synthesis resolves the conflict between the parts by reconciling their commonalities in order to form something new. It's the task of the software designer to reconcile the best of both worlds. Software designers have to evaluate different trade-offs between human and artificial factors.
As a problem-solving activity, software construction is solution-oriented: the ultimate goal of software is to provide a solution to some specific problem. Such a solution is evaluated by means of a model of the solution domain. But before arriving at such a solution domain model, we have to form the problem domain model, which captures the aspects of reality that are relevant to the problem. Later, designers look for a solution, as noted, by trial and error. Additionally, the resources available to the designer, including knowledge, are limited. More often than not, empiricism and experience lead the search for a solution. This has an important consequence: software construction is a non-optimal process; we rarely arrive at the best solution (and which is the best solution, anyway?).
Knowledge acquisition, for its part, is another interesting process. During the entire development cycle, designers have access to incomplete knowledge. Gradually, designers learn the concepts pertinent to the problem domain model. And, while we are building the problem domain model, it often happens that the client's perspective of the problem changes, and we have to adjust to the new requirements. Interestingly enough, knowledge acquisition is a nonlinear process: sometimes a new piece of information may invalidate all our designs, and we must be prepared to start over.