Yes, a client can read a file that is already open for writing.
However, reading a file that is still being written raises a data-consistency issue: HDFS does not guarantee that data written to the file will be visible to a new reader until the file has been closed.
To make in-progress data visible, the writer can call the hflush operation explicitly. hflush pushes all buffered data into the write pipeline and then waits for acknowledgments from the datanodes. Once hflush returns, all data written to the file before the call is guaranteed to be visible to new readers, even though the file is still open.
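As a minimal sketch of this pattern using the Hadoop Java client (assuming a reachable cluster; the address `hdfs://namenode:8020` and the path `/tmp/inflight.log` are placeholders, not from the original answer):

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

import java.nio.charset.StandardCharsets;

public class HflushExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Assumed NameNode address; adjust to your environment.
        conf.set("fs.defaultFS", "hdfs://namenode:8020");

        try (FileSystem fs = FileSystem.get(conf);
             FSDataOutputStream out = fs.create(new Path("/tmp/inflight.log"))) {

            out.write("first batch of records\n".getBytes(StandardCharsets.UTF_8));

            // Push buffered bytes into the write pipeline and wait for
            // datanode acknowledgments. After this returns, a new reader
            // opening the file will see everything written so far, even
            // though the file is still open for writing.
            out.hflush();

            out.write("second batch of records\n".getBytes(StandardCharsets.UTF_8));
            // Data written after the last hflush is not guaranteed to be
            // visible to readers until the next hflush/hsync or close().
        }
    }
}
```

Note that hflush only guarantees visibility to readers (the data may still sit in datanode memory); if you also need the data persisted to disk on the datanodes, use hsync instead.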